Iterative Genetic Improvement: Scaling Stochastic Program Synthesis (2202.13040v1)

Published 26 Feb 2022 in cs.NE, cs.AI, and cs.SE

Abstract: Program synthesis aims to *automatically* find programs from an underlying programming language that satisfy a given specification. While this has the potential to revolutionize computing, how to search the vast space of programs efficiently remains an unsolved challenge. In cases where large programs are required for a solution, *stochastic* search is generally believed to have advantages over other classes of search techniques. Unfortunately, existing stochastic program synthesizers do not meet this expectation well, suffering from poor scalability. Here we propose a new framework for stochastic program synthesis, called iterative genetic improvement, to overcome this problem; the technique is inspired by the practice of the software development process. The key idea of iterative genetic improvement is to apply genetic improvement to a current reference program, and then to iteratively replace the reference program with the best program found. Compared to traditional stochastic synthesis approaches, iterative genetic improvement builds up the complexity of programs incrementally in a more robust way. We evaluate the approach on two program synthesis domains: list manipulation and string transformation. Our empirical results indicate that this method has considerable advantages over several representative stochastic program synthesis techniques, in terms of both scalability and solution quality.
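
The key idea lends itself to a short sketch: an inner genetic-improvement loop evolves variants of the current reference program, and an outer loop replaces the reference whenever a better program is found. The Python sketch below is a minimal, hypothetical rendering of that structure based only on the abstract; the function names (`evaluate`, `mutate`, `crossover`), the selection scheme, and all hyperparameters are assumptions, not the paper's actual operators or settings.

```python
import random

def genetic_improvement(reference, evaluate, mutate, crossover,
                        pop_size=100, generations=50):
    """Inner loop (assumed form): evolve variants of the reference program.

    evaluate/mutate/crossover are user-supplied, domain-specific functions;
    their exact definitions in the paper are not given in the abstract.
    """
    # Seed the population with mutated copies of the current reference.
    population = [mutate(reference) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half as parents.
        population.sort(key=evaluate, reverse=True)  # higher fitness is better
        parents = population[: pop_size // 2]
        # Refill the population with recombined and mutated offspring.
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=evaluate)

def iterative_genetic_improvement(initial_program, evaluate, mutate, crossover,
                                  iterations=20):
    """Outer loop: repeatedly run genetic improvement and adopt any improved
    program as the new reference, building up complexity incrementally."""
    reference = initial_program
    best_fitness = evaluate(reference)
    for _ in range(iterations):
        candidate = genetic_improvement(reference, evaluate, mutate, crossover)
        if evaluate(candidate) > best_fitness:
            # Replace the reference only on strict improvement.
            reference, best_fitness = candidate, evaluate(candidate)
    return reference
```

Replacing the reference only on strict improvement keeps the outer loop monotone, which is one plausible way to realize the incremental, robust build-up of program complexity the abstract describes.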

Authors (2)
  1. Yuan Yuan (234 papers)
  2. Wolfgang Banzhaf (29 papers)
Citations (3)