
CharED: Character-wise Ensemble Decoding for Large Language Models (2407.11009v1)

Published 25 Jun 2024 in cs.CL and cs.LG

Abstract: LLMs have shown remarkable potential for problem solving, with open source models achieving increasingly impressive performance on benchmarks measuring areas from logical reasoning to mathematical ability. Ensembling models can further improve capabilities across a variety of domains. However, conventional methods of combining models at inference time such as shallow fusion necessitate a shared vocabulary and tokenization, and alternatives like fine-tuning for domain-specific performance are both time consuming and computationally expensive. We therefore present an inference-time ensembling algorithm aimed at "averaging" outputs from multiple LLMs and illustrate its improved performance across multiple domains compared to its constituent models alone. Character-wise ensemble decoding, CharED, finds the marginal distribution of each character for an individual model and performs a weighted average to generate an output, character by character. In coding, math, and toxicity benchmarks, we find our proposed model able to combine complementary strengths of multiple LLMs, regardless of vocabulary, tokenization, or model size.
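The core idea, as the abstract describes it, is to marginalize each model's next-token distribution down to a distribution over the next character, then take a weighted average across models. The sketch below illustrates that mechanism with toy stand-in "models" (plain token-to-probability dicts); it is a simplified assumption for demonstration, not the paper's implementation, which queries actual LLMs and handles full continuations.

```python
# Illustrative sketch of character-wise ensemble decoding.
# Each toy "model" maps a candidate next token to its probability given
# the current prefix; the real method queries LLMs with possibly
# different vocabularies and tokenizers.

from collections import defaultdict

def char_marginals(token_probs):
    """Marginal distribution over the first character of the next token."""
    marg = defaultdict(float)
    for token, p in token_probs.items():
        if token:
            marg[token[0]] += p
    return dict(marg)

def chared_step(models, weights):
    """One decoding step: weighted average of per-model character marginals,
    then greedily emit the highest-probability character."""
    combined = defaultdict(float)
    for token_probs, w in zip(models, weights):
        for ch, p in char_marginals(token_probs).items():
            combined[ch] += w * p
    return max(combined, key=combined.get)

# Two toy models with different vocabularies/tokenizations.
model_a = {"the": 0.6, "then": 0.1, "a": 0.3}  # char marginals: t=0.7, a=0.3
model_b = {"th": 0.4, "an": 0.5, "to": 0.1}    # char marginals: t=0.5, a=0.5

print(chared_step([model_a, model_b], weights=[0.5, 0.5]))  # prints 't'
```

Because the averaging happens at the character level, the two models need not share a vocabulary: model_a and model_b above tokenize differently, yet their distributions combine cleanly (here t = 0.6, a = 0.4, so 't' is emitted).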

Authors (6)
  1. Kevin Gu (2 papers)
  2. Eva Tuecke (2 papers)
  3. Dmitriy Katz (7 papers)
  4. Raya Horesh (10 papers)
  5. David Alvarez-Melis (48 papers)
  6. Mikhail Yurochkin (68 papers)
Citations (2)