What do LLMs need to Synthesize Correct Router Configurations? (2307.04945v1)

Published 11 Jul 2023 in cs.NI and cs.PL

Abstract: We investigate whether LLMs (e.g., GPT-4) can synthesize correct router configurations with reduced manual effort. We find that GPT-4 by itself performs very badly, producing promising draft configurations but with egregious errors in topology, syntax, and semantics. Our strategy, which we call Verified Prompt Programming, is to combine GPT-4 with verifiers, and to use localized feedback from the verifier to automatically correct errors. Verification requires a specification and actionable localized feedback to be effective. We show results for two use cases: translating from Cisco to Juniper configurations on a single router, and implementing a no-transit policy on multiple routers. While human input is still required, if we define leverage as the ratio of automated prompts to human prompts, our experiments show a leverage of 10X for Juniper translation and 6X for implementing the no-transit policy, ending with verified configurations.
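
The draft-verify-repair cycle described in the abstract can be sketched roughly as follows. This is a minimal illustration of the idea, not the authors' implementation: the function names (llm_draft, verify), the VerifierResult type, and the prompt format are all assumptions introduced here for exposition.

```python
# Sketch of a Verified Prompt Programming loop: an LLM drafts a router
# configuration, a verifier checks it against a specification, and localized
# error feedback is fed back to the LLM until the configuration verifies.
# All names below are illustrative placeholders, not APIs from the paper
# or from any real LLM/verifier toolchain.

from dataclasses import dataclass


@dataclass
class VerifierResult:
    ok: bool
    # Localized, actionable feedback, e.g.
    # "line 42: BGP neighbor 10.0.0.2 is missing an export policy"
    errors: list[str]


def llm_draft(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., GPT-4) that returns a config."""
    raise NotImplementedError


def verify(config: str, spec: str) -> VerifierResult:
    """Placeholder for a verifier checking syntax, topology, and policy."""
    raise NotImplementedError


def verified_prompt_programming(task: str, spec: str, max_rounds: int = 20) -> tuple[str, int]:
    """Iterate LLM drafts against verifier feedback.

    Returns the verified configuration and the number of automated prompts used.
    """
    prompt = task
    automated_prompts = 0
    for _ in range(max_rounds):
        config = llm_draft(prompt)
        automated_prompts += 1
        result = verify(config, spec)
        if result.ok:
            return config, automated_prompts
        # Feed the localized errors back so the LLM corrects only what is wrong.
        prompt = (
            f"{task}\n\nYour previous configuration:\n{config}\n\n"
            "The verifier reported these errors; fix them:\n"
            + "\n".join(result.errors)
        )
    # If the loop does not converge, a human prompt is needed, which lowers the
    # leverage (automated prompts per human prompt; 10X and 6X in the paper's
    # two use cases).
    raise RuntimeError("Verifier feedback did not converge; human input required.")
```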

