
AbstractBeam: Enhancing Bottom-Up Program Synthesis using Library Learning (2405.17514v3)

Published 27 May 2024 in cs.SE, cs.AI, and cs.PL

Abstract: LambdaBeam is a state-of-the-art, execution-guided algorithm for program synthesis that utilizes higher-order functions, lambda functions, and iterative loops within a Domain-Specific Language (DSL). LambdaBeam generates each program from scratch but does not take advantage of the frequent recurrence of program blocks or subprograms commonly found in specific domains, such as loops for list traversal. To address this limitation, we introduce AbstractBeam: a novel program synthesis framework designed to enhance LambdaBeam by leveraging Library Learning. AbstractBeam identifies and integrates recurring program structures into the DSL, optimizing the synthesis process. Our experimental evaluations demonstrate that AbstractBeam statistically significantly (p < 0.05) outperforms LambdaBeam in the integer list manipulation domain. Beyond solving more tasks, AbstractBeam's program synthesis is also more efficient, requiring less time and fewer candidate programs to generate a solution. Furthermore, our findings indicate that Library Learning effectively enhances program synthesis in domains that are not explicitly designed to showcase its advantages, thereby highlighting the broader applicability of Library Learning.
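To make the abstract's idea concrete, the following is a minimal, self-contained Python sketch of bottom-up enumerative synthesis over a toy integer-list DSL, and of how adding a learned abstraction to that DSL shortens the search. It is only an illustration of the general technique: the primitive names, the toy task, and the hand-written abstraction are hypothetical and do not reflect LambdaBeam's neural beam search or AbstractBeam's actual library-learning procedure.

```python
# Illustrative sketch of bottom-up enumerative synthesis over a toy
# integer-list DSL, plus the effect of library learning: a recurring
# subprogram is abstracted into a new DSL primitive, so later searches
# succeed at a smaller depth. All names here are hypothetical stand-ins.

from itertools import product

# Toy DSL: each primitive maps a list of ints to a list of ints.
BASE_DSL = {
    "reverse": lambda xs: list(reversed(xs)),
    "sort":    lambda xs: sorted(xs),
    "double":  lambda xs: [x * 2 for x in xs],
    "drop1":   lambda xs: xs[1:],
}

def bottom_up_search(dsl, examples, max_depth=4):
    """Enumerate compositions of DSL primitives, shortest first, and return
    the first program whose execution matches every input/output example."""
    for depth in range(1, max_depth + 1):
        for ops in product(dsl, repeat=depth):
            def run(xs, ops=ops):
                for op in ops:
                    xs = dsl[op](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return list(ops)
    return None

# Toy task: reverse the list, double every element, then drop the first item.
examples = [([1, 2, 3], [4, 2]), ([5, 1], [10])]

print(bottom_up_search(BASE_DSL, examples))
# -> ['reverse', 'double', 'drop1']  (found only at depth 3)

# Library learning: if "reverse then double" recurs across solved tasks,
# it is abstracted into a new primitive and added to the DSL.
LEARNED_DSL = dict(BASE_DSL)
LEARNED_DSL["rev_double"] = lambda xs: [x * 2 for x in reversed(xs)]

print(bottom_up_search(LEARNED_DSL, examples))
# -> ['rev_double', 'drop1']  (found at depth 2, after enumerating fewer candidates)
```

Under these toy assumptions, the enlarged DSL lets the enumerator reach a solution with a shorter program and fewer candidate evaluations, which mirrors the abstract's claim that AbstractBeam needs less time and fewer candidate programs than LambdaBeam once recurring structures are folded into the DSL.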

Authors (4)
  1. Janis Zenkner (3 papers)
  2. Lukas Dierkes (1 paper)
  3. Tobias Sesterhenn (3 papers)
  4. Christian Bartelt (1 paper)

