Task Agnostic Architecture for Algorithm Induction via Implicit Composition (2404.02450v1)

Published 3 Apr 2024 in cs.LG and cs.AI

Abstract: Different fields in applied machine learning, such as computer vision, speech, and natural language processing, have traditionally built domain-specialised solutions. Currently, we are witnessing an opposing trend towards more generalist architectures, driven by LLMs and multi-modal foundation models. These architectures are designed to tackle a variety of tasks, including previously unseen ones, using inputs across multiple modalities. Taking this trend of generalisation to the extreme suggests the possibility of a single deep network architecture capable of solving all tasks. This position paper explores the development of such a unified architecture and proposes a theoretical framework for how it could be constructed. Our proposal rests on the following assumptions. First, tasks are solved by following a sequence of instructions, typically implemented in code for conventional computing hardware, which inherently operates sequentially. Second, recent generative AI systems, especially Transformer-based models, show potential as architectures capable of constructing algorithms for a wide range of domains; for example, GPT-4's exceptional capability at in-context learning of novel tasks is hard to explain other than by an ability to compose novel solutions from fragments of previously learnt algorithms. Third, the main missing component in developing a truly generalised network is an efficient mechanism for self-consistently feeding in previously learnt sub-steps of an algorithm and for their (implicit) composition during the network's internal forward pass. Our exploration examines the current capabilities and limitations of Transformer-based and other methods for efficient and correct algorithm composition, and proposes a Transformer-like architecture as well as a discrete learning framework to overcome these limitations.
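The abstract's third assumption centres on composing previously learnt sub-steps inside a single forward pass. As a rough, hypothetical illustration of that idea (a minimal sketch under our own assumptions, not the architecture proposed in the paper), the PyTorch snippet below softly routes each token's hidden state through a small bank of learnt "sub-step" modules; all module names, shapes, and the routing scheme are illustrative.

```python
# Toy sketch of "implicit composition" of learnt sub-steps in a forward pass.
# Hypothetical illustration only; not the paper's proposed architecture.
import torch
import torch.nn as nn


class ImplicitCompositionLayer(nn.Module):
    def __init__(self, d_model: int, num_substeps: int):
        super().__init__()
        # Bank of (assumed) previously learnt sub-step transformations.
        self.substeps = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(num_substeps)
        )
        # Router decides, per token, how to mix the sub-steps.
        self.router = nn.Linear(d_model, num_substeps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        weights = torch.softmax(self.router(x), dim=-1)            # (B, T, K)
        outputs = torch.stack([m(x) for m in self.substeps], -1)    # (B, T, D, K)
        # Implicit composition: weighted mix of sub-step outputs, added residually.
        mixed = (outputs * weights.unsqueeze(2)).sum(dim=-1)        # (B, T, D)
        return x + mixed


# Stacking several such layers lets later layers compose the results of earlier
# sub-steps, loosely mimicking sequential instruction execution.
layer = ImplicitCompositionLayer(d_model=64, num_substeps=4)
h = torch.randn(2, 10, 64)
print(layer(h).shape)  # torch.Size([2, 10, 64])
```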

Authors (2)
  1. Sahil J. Sindhi (1 paper)
  2. Ignas Budvytis (26 papers)

