Small Character Models Match Large Word Models for Autocomplete Under Memory Constraints (2210.03251v2)

Published 6 Oct 2022 in cs.CL

Abstract: Autocomplete is a task where the user inputs a piece of text, termed the prompt, on which the model conditions to generate a semantically coherent continuation. Existing work on this task has primarily focused on datasets (e.g., email, chat) with high-frequency user prompt patterns (or focused prompts), where word-based language models have been quite effective. In this work, we study the more challenging open-domain setting consisting of low-frequency user prompt patterns (or broad prompts, e.g., a prompt about the 93rd Academy Awards) and demonstrate the effectiveness of character-based language models. We study this problem under memory-constrained settings (e.g., edge devices and smartphones), where character-based representation is effective in reducing the overall model size (in terms of parameters). We use the WikiText-103 benchmark to simulate broad prompts and demonstrate that character models rival word models in exact-match accuracy for the autocomplete task when controlled for model size. For instance, we show that a 20M-parameter character model performs similarly to an 80M-parameter word model in the vanilla setting. We further propose novel methods to improve character models by incorporating inductive bias in the form of compositional information and representation transfer from large word models. Datasets and code used in this work are available at https://github.com/UBC-NLP/char_autocomplete.
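
A minimal sketch, not from the paper, of why a character vocabulary shrinks the parameter budget: the embedding table (and the untied output projection) scales linearly with vocabulary size, so a character set of ~100 symbols is far cheaper than the ~267k word types of WikiText-103. The hidden size and vocabulary figures below are assumptions for illustration only.

```python
def embedding_params(vocab_size: int, d_model: int, tied: bool = True) -> int:
    """Parameters spent on the embedding table, plus the output softmax
    projection when input/output weights are not tied."""
    tables = 1 if tied else 2
    return tables * vocab_size * d_model

d_model = 512          # assumed hidden size for illustration
char_vocab = 128       # e.g., an ASCII-level character set
word_vocab = 267_735   # WikiText-103 word-type count

print(f"char embeddings: {embedding_params(char_vocab, d_model):,}")  # 65,536
print(f"word embeddings: {embedding_params(word_vocab, d_model):,}")  # 137,080,320
```

Under these assumed sizes the word model spends over 100M parameters on its vocabulary alone, which is why, at a fixed memory budget, a character model can reallocate capacity to the transformer layers themselves.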

Authors (8)
  1. Ganesh Jawahar (11 papers)
  2. Subhabrata Mukherjee (59 papers)
  3. Debadeepta Dey (32 papers)
  4. Muhammad Abdul-Mageed (102 papers)
  5. Laks V. S. Lakshmanan (58 papers)
  6. Gustavo Henrique de Rosa (5 papers)
  7. Shital Shah (16 papers)
  8. Caio Cesar Teodoro Mendes (2 papers)