
Synthesizing Traffic Datasets using Graph Neural Networks (2312.05031v1)

Published 8 Dec 2023 in cs.CV and cs.LG

Abstract: Traffic congestion in urban areas presents significant challenges, and Intelligent Transportation Systems (ITS) have sought to address these via automated and adaptive controls. However, such systems often struggle to transfer simulated experience to real-world scenarios. This paper introduces a methodology for bridging this "sim-real" gap by creating photorealistic images from 2D traffic simulations and recorded junction footage. We propose a novel image generation approach that integrates a Conditional Generative Adversarial Network with a Graph Neural Network (GNN) to facilitate the creation of realistic urban traffic images. We harness the GNN's ability to process information at different levels of abstraction, alongside segmented images that preserve locality data. The presented architecture leverages SPADE and Graph Attention (GAT) network models to create images from simulated traffic scenarios, conditioned on factors such as entity positions, colors, and time of day. The uniqueness of our approach lies in its ability to translate structured, human-readable conditions, encoded as graphs, into realistic images. This advancement benefits applications requiring rich traffic image datasets, from data augmentation to urban traffic solutions. We further provide an application to test the model's capabilities, including generating images with manually defined positions for various entities.
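The GAT component mentioned in the abstract aggregates node features (e.g. entity positions and colors encoded as a graph) with learned attention over neighbours. As a rough illustration of that mechanism only — not the authors' implementation, and with all names and shapes chosen here for the sketch — a single-head graph attention layer can be written in plain NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a, negative_slope=0.2):
    """One single-head graph attention layer (in the spirit of GAT).

    H: (N, F)  node features        A: (N, N) adjacency (1 = edge, incl. self-loops)
    W: (F, F') projection weights   a: (2*F',) attention vector
    Returns (N, F') attention-aggregated node features.
    """
    Z = H @ W                                  # project features: (N, F')
    Fp = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) decomposes into a source and a target term
    src = Z @ a[:Fp]                           # (N,)
    dst = Z @ a[Fp:]                           # (N,)
    e = src[:, None] + dst[None, :]            # (N, N) raw logits
    e = np.where(e > 0, e, negative_slope * e) # LeakyReLU
    e = np.where(A > 0, e, -1e9)               # mask non-edges before softmax
    att = softmax(e, axis=1)                   # normalise over each node's neighbours
    return att @ Z                             # weighted neighbour aggregation
```

In the paper's setting such a layer would sit inside the generator's conditioning path, letting each entity's embedding attend to related entities before the SPADE blocks consume the result.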

