
Chain-of-Thought Augmentation with Logit Contrast for Enhanced Reasoning in Language Models

Published 4 Jul 2024 in cs.CL (arXiv:2407.03600v2)

Abstract: Rapidly increasing model scales coupled with steering methods such as chain-of-thought prompting have led to drastic improvements in LLM reasoning. At the same time, models struggle with compositional generalization and are far from human performance on many reasoning-based benchmarks. Leveraging the success of chain-of-thought prompting, and also taking inspiration from context-aware decoding (CAD), we explore input-based contrasting methods to further encourage the type of reasoning induced by chain-of-thought prompting. While work remains to stabilize these results across datasets and models, the improvements we find warrant further investigation into input-based steering methods for context-aware reasoning.
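The input-based contrasting the abstract describes can be illustrated with the standard context-aware decoding (CAD) formulation: next-token logits from a chain-of-thought-augmented prompt are contrasted against logits from the plain prompt, amplifying whatever the CoT context contributes. The function below is a minimal sketch of that idea, not the paper's exact method; the token values, the `alpha` weight, and the function names are hypothetical.

```python
import numpy as np

def contrastive_logits(logits_with_cot, logits_plain, alpha=1.0):
    """CAD-style contrast (illustrative): boost the component of the
    CoT-conditioned distribution that the plain prompt does not explain.
    adjusted = (1 + alpha) * with_cot - alpha * plain
    """
    return (1 + alpha) * np.asarray(logits_with_cot) - alpha * np.asarray(logits_plain)

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

# Toy 4-token vocabulary. The plain prompt pulls toward token 0; the
# CoT-augmented prompt mildly prefers token 2. Contrasting the two
# sharpens the CoT-induced preference so token 2 wins.
with_cot = np.array([1.5, 0.2, 1.4, 0.1])
plain    = np.array([1.6, 0.2, 0.5, 0.1])

adjusted = contrastive_logits(with_cot, plain, alpha=1.0)
probs = softmax(adjusted)
# Without contrast the greedy choice is token 0; with contrast it is token 2.
```

In a real decoder this adjustment would be applied at every generation step, e.g. via a logits processor, with `alpha` controlling how aggressively the CoT context is amplified.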
