Efficient Tool Use with Chain-of-Abstraction Reasoning (2401.17464v2)

Published 30 Jan 2024 in cs.CL

Abstract: To achieve faithful reasoning that aligns with human expectations, LLMs need to ground their reasoning to real-world knowledge (e.g., web facts, math and physical rules). Tools help LLMs access this external knowledge, but there remain challenges for fine-tuning LLM agents (e.g., Toolformer) to invoke tools in multi-step reasoning problems, where inter-connected tool calls require holistic and efficient tool usage planning. In this work, we propose a new method for LLMs to better leverage tools in multi-step reasoning. Our method, Chain-of-Abstraction (CoA), trains LLMs to first decode reasoning chains with abstract placeholders, and then call domain tools to reify each reasoning chain by filling in specific knowledge. This planning with abstract chains enables LLMs to learn more general reasoning strategies, which are robust to shifts of domain knowledge (e.g., math results) relevant to different reasoning questions. It also allows LLMs to perform decoding and calling of external tools in parallel, which avoids the inference delay caused by waiting for tool responses. In mathematical reasoning and Wiki QA domains, we show that our method consistently outperforms previous chain-of-thought and tool-augmented baselines on both in-distribution and out-of-distribution test sets, with an average ~6% absolute QA accuracy improvement. LLM agents trained with our method also show more efficient tool use, with inference speed being on average ~1.4x faster than baseline tool-augmented LLMs.

Introduction

In an effort to elevate the capabilities of LLMs in complex reasoning tasks, recent research has introduced a novel approach titled "Chain-of-Abstraction" (CoA) reasoning. This framework is designed to refine and expedite multi-step problem-solving by utilizing abstract placeholders in reasoning chains, which are subsequently filled with precise data through domain-specific tools. This strategy contrasts markedly with existing models, where the interleaving of text generation with API calls tends to introduce significant inefficiencies.
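
To make this concrete, here is a minimal illustrative sketch, not the paper's exact data format: the bracketed placeholder names (y1, y2) and the toy arithmetic "tool" are assumptions chosen for readability. The model first writes a reasoning chain whose intermediate results are abstract placeholders; a domain tool then fills those placeholders with computed values.

```python
# Illustrative sketch of Chain-of-Abstraction (placeholder syntax is assumed, not the paper's exact format).

# Stage 1: the LLM decodes an abstract reasoning chain with placeholders instead of concrete results.
abstract_chain = (
    "The class bought 20 + 35 = [y1] cupcakes in total. "
    "After eating 15 of them, [y1] - 15 = [y2] cupcakes were left."
)

# Stage 2: a domain tool (here, a toy equation solver) computes the values and reifies the chain.
tool_results = {"y1": str(20 + 35), "y2": str(20 + 35 - 15)}

concrete_chain = abstract_chain
for placeholder, value in tool_results.items():
    concrete_chain = concrete_chain.replace(f"[{placeholder}]", value)

print(concrete_chain)
# -> The class bought 20 + 35 = 55 cupcakes in total. After eating 15 of them, 55 - 15 = 40 cupcakes were left.
```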

Methodology

The key innovation of the CoA approach lies in its two-stage process. First, LLMs are fine-tuned to produce reasoning chains built around abstract placeholders. These chains are then 'reified': domain-specific tools fill the placeholders with concrete knowledge. Decoupling general reasoning from domain-specific knowledge lets the model learn reasoning strategies that remain robust when the underlying domain knowledge shifts. It also allows decoding and tool calls to proceed in parallel across samples, so the model can decode the next reasoning chain while tools are still reifying the previous one, improving overall inference speed.
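
This parallelism can be pictured as a simple pipeline that overlaps tool calls for one sample with LLM decoding of the next. The sketch below is an assumption-laden illustration rather than the paper's implementation: llm_decode and reify_with_tools are hypothetical stand-ins for the fine-tuned model and the domain tools (e.g., an equation solver or a Wikipedia search).

```python
# Minimal sketch of a parallel decode-and-reify pipeline (hypothetical helpers, not the paper's code).
from concurrent.futures import ThreadPoolExecutor


def llm_decode(question: str) -> str:
    # Assumed stand-in for the fine-tuned LLM: emits a chain with abstract placeholders.
    return f"abstract chain with placeholders for: {question}"


def reify_with_tools(abstract_chain: str) -> str:
    # Assumed stand-in for domain tools: fills the placeholders with concrete knowledge.
    return abstract_chain.replace("placeholders", "concrete values")


def answer_all(questions: list[str]) -> list[str]:
    # Overlap tool reification of sample i with LLM decoding of sample i + 1.
    results = []
    with ThreadPoolExecutor(max_workers=1) as tool_pool:
        pending = None  # future holding the tool call for the previous sample
        for q in questions:
            chain = llm_decode(q)                                 # decode the current sample
            if pending is not None:
                results.append(pending.result())                  # collect the previous sample's answer
            pending = tool_pool.submit(reify_with_tools, chain)   # reify in the background
        if pending is not None:
            results.append(pending.result())
    return results


print(answer_all(["What is 20 + 35 - 15?", "Where was the author of Hamlet born?"]))
```

This overlap is what the abstract describes as performing decoding and tool calls in parallel: the LLM does not sit idle waiting for tool responses, which is where the reported ~1.4x speedup over baseline tool-augmented agents comes from.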

Performance Evaluation

The researchers applied CoA reasoning to a variety of LLM architectures and evaluated it on mathematical reasoning and Wikipedia-based question answering. The findings are notable: an average improvement of roughly 6% absolute QA accuracy over chain-of-thought and tool-augmented baselines, with inference about 1.4× faster than baseline tool-augmented LLMs. The gains hold on both in-distribution and out-of-distribution test sets, underscoring the method's robustness. Additionally, human evaluations showed that CoA reasoning produces approximately 8% fewer reasoning errors.

Relevance and Potential

This research paradigm introduces a shift in existing LLM methodologies, moving towards a more efficient system that separates the generation of reasoning chains from the execution of specialized knowledge operations. These findings suggest that by employing CoA reasoning, sizable improvements in both the accuracy of complex, multi-step reasoning tasks and the speed of inference can be achieved. Moreover, the method's success in both mathematical and factual domains lends credence to its versatility and adaptability to additional areas where complex reasoning is imperative. The potential impact of CoA reasoning extends to broadening the scope of LLM applications, making them more reliable and efficient partners in problem-solving across diverse knowledge domains.

References (47)
  1. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.
  2. Modern information retrieval, volume 463. ACM Press, New York.
  3. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544.
  4. Large language models as tool makers. arXiv preprint arXiv:2305.17126.
  5. FireAct: Toward language agent fine-tuning. arXiv preprint arXiv:2310.05915.
  6. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
  7. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588.
  8. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
  9. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
  10. PAL: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR.
  11. OpenAGI: When LLM meets domain experts. arXiv preprint arXiv:2304.04370.
  12. CRITIC: Large language models can self-correct with tool-interactive critiquing. arXiv preprint arXiv:2305.11738.
  13. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992.
  14. ToolkenGPT: Augmenting frozen language models with massive tools via tool embeddings. arXiv preprint arXiv:2305.11554.
  15. MetaTool benchmark for large language models: Deciding whether to use tools and which to use. arXiv preprint arXiv:2310.03128.
  16. A comprehensive evaluation of tool-assisted generation strategies. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13856–13878.
  17. Survey of hallucination in natural language generation. ACM Computing Surveys, 55:1–38.
  18. GeneGPT: Augmenting large language models with domain tools for improved access to biomedical information.
  19. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611.
  20. MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152–1157.
  21. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.
  22. ML-Bench: Large language models leverage open-source libraries for machine learning tasks. arXiv preprint arXiv:2311.09835.
  23. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations.
  24. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842.
  25. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919.
  26. A diverse corpus for evaluating and developing English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984.
  27. OpenAI. 2023. GPT-4 technical report.
  28. TALM: Tool augmented language models. arXiv preprint arXiv:2205.12255.
  29. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094.
  30. Gorilla: Large language model connected with massive APIs. arXiv preprint arXiv:2305.15334.
  31. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544.
  32. Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992.
  33. Okapi at TREC-3. NIST Special Publication SP, 109:109.
  34. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.
  35. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580.
  36. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
  37. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
  38. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.
  39. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
  40. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.
  41. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4003–4012.
  42. On the tool manipulation capability of open-source large language models. arXiv preprint arXiv:2305.16504.
  43. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
  44. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.
  45. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
  46. ReAct: Synergizing reasoning and acting in language models.
  47. ToolChain*: Efficient action space navigation in large language models with A* search. arXiv preprint arXiv:2310.13227.
Authors (10)
  1. Silin Gao (17 papers)
  2. Jane Dwivedi-Yu (26 papers)
  3. Ping Yu (42 papers)
  4. Xiaoqing Ellen Tan (9 papers)
  5. Ramakanth Pasunuru (32 papers)
  6. Olga Golovneva (17 papers)
  7. Koustuv Sinha (31 papers)
  8. Asli Celikyilmaz (80 papers)
  9. Antoine Bosselut (85 papers)
  10. Tianlu Wang (33 papers)