
Towards Full-line Code Completion with Neural Language Models (2009.08603v1)

Published 18 Sep 2020 in cs.SE and cs.CL

Abstract: A code completion system suggests future code elements to developers given a partially complete code snippet. Code completion is one of the most useful features in Integrated Development Environments (IDEs). Currently, most code completion techniques predict a single token at a time. In this paper, we take a further step and discuss the possibility of directly completing a whole line of code instead of a single token. We believe suggesting longer code sequences can further improve developer efficiency. Recently, neural language models have been adopted as a preferred approach for code completion, and we believe these models can still be applied to full-line code completion with a few improvements. We conduct our experiments on two real-world Python corpora and evaluate existing neural models based on source code tokens or syntactical actions. The results show that neural language models can achieve acceptable results on our tasks, with significant room for improvement.
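
To make the task definition concrete, the sketch below shows full-line completion as autoregressive decoding that stops at the end of the current line. This is not the authors' implementation: the paper evaluates its own token-level and syntax-action neural models on Python corpora, while here a generic pretrained GPT-2 (via the Hugging Face transformers library) stands in as a hypothetical language model purely to illustrate the decoding loop.

```python
# Minimal sketch of full-line code completion via greedy autoregressive
# decoding. GPT-2 is a stand-in model, not the one studied in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def complete_line(prefix: str, max_new_tokens: int = 32) -> str:
    """Extend `prefix` token by token, then truncate at the first
    newline so the suggestion covers the rest of the current line."""
    input_ids = tokenizer(prefix, return_tensors="pt").input_ids
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=False,                       # greedy decoding
        pad_token_id=tokenizer.eos_token_id,   # silence GPT-2 pad warning
    )
    # Decode only the newly generated tokens, not the prompt.
    completion = tokenizer.decode(output_ids[0][input_ids.shape[1]:])
    # The task asks for one full line, so stop at the line boundary.
    return completion.split("\n")[0]

print(complete_line("def read_json(path):\n    with open(path) as f:\n        return "))
```

A real system would rank several candidate lines (e.g., via beam search) rather than return a single greedy completion, which is one of the improvements the paper discusses.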

Authors (4)
  1. Wenhan Wang (22 papers)
  2. Sijie Shen (8 papers)
  3. Ge Li (213 papers)
  4. Zhi Jin (160 papers)
Citations (15)
