
Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models (2407.15504v2)

Published 22 Jul 2024 in cs.LG, cs.CL, cs.IT, and math.IT

Abstract: We formalize the problem of prompt compression for LLMs and present a framework to unify token-level prompt compression methods which create hard prompts for black-box models. We derive the distortion-rate function for this setup as a linear program, and provide an efficient algorithm to compute this fundamental limit via the dual of the linear program. Using the distortion-rate function as the baseline, we study the performance of existing compression schemes on a synthetic dataset consisting of prompts generated from a Markov chain, natural language queries, and their respective answers. Our empirical analysis demonstrates the criticality of query-aware prompt compression, where the compressor has knowledge of the downstream task/query for the black-box LLM. We show that there is a large gap between the performance of current prompt compression methods and the optimal strategy, and propose Adaptive QuerySelect, a query-aware, variable-rate adaptation of a prior work to close the gap. We extend our experiments to a small natural language dataset to further confirm our findings on our synthetic dataset.
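To make the linear-programming formulation mentioned in the abstract concrete, here is a minimal, hypothetical sketch of a distortion-rate LP for prompt compression: choose a conditional distribution q(m|x) over candidate compressed prompts m that minimizes expected distortion subject to an expected-length rate budget. The toy distortion matrix, candidate set, and rate constraint below are illustrative assumptions, not the paper's exact formulation (the paper computes the limit efficiently via the LP's dual).

```python
# Hypothetical sketch: one point of a distortion-rate curve as a linear program.
# Variables q[x, m] = P(compressed prompt m | original prompt x).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n_prompts, n_compressed = 4, 6            # toy problem sizes (illustrative)
p_x = np.full(n_prompts, 1 / n_prompts)   # prompt distribution (assumed uniform)
lengths = rng.integers(1, 5, size=n_compressed)  # token length of each candidate m
D = rng.random((n_prompts, n_compressed))        # toy distortion d(x, m)
R = 2.5                                          # rate budget (expected tokens)

# Flatten q[x, m] into a vector; objective is expected distortion.
c = (p_x[:, None] * D).ravel()

# Rate constraint: sum_{x,m} p(x) q(m|x) len(m) <= R.
A_ub = (p_x[:, None] * lengths[None, :]).ravel()[None, :]
b_ub = [R]

# Normalization: sum_m q(m|x) = 1 for every prompt x.
A_eq = np.zeros((n_prompts, n_prompts * n_compressed))
for x in range(n_prompts):
    A_eq[x, x * n_compressed:(x + 1) * n_compressed] = 1.0
b_eq = np.ones(n_prompts)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
print(f"optimal expected distortion at rate R = {R}: {res.fun:.4f}")
```

Because both the objective and the rate constraint are linear in q, the optimum traces out the distortion-rate function as R varies; the dual formulation the paper uses avoids solving the full primal at every rate.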

Authors (6)
  1. Adway Girish (5 papers)
  2. Alliot Nagle (6 papers)
  3. Marco Bondaschi (11 papers)
  4. Michael Gastpar (99 papers)
  5. Ashok Vardhan Makkuva (15 papers)
  6. Hyeji Kim (42 papers)
Citations (1)