Improve Language Modelling for Code Completion through Statement Level Language Model based on Statement Embedding Generated by BiLSTM (1909.11503v2)

Published 25 Sep 2019 in cs.SE

Abstract: Language models such as RNNs, LSTMs, and their variants have been widely used as generative models in natural language processing. In recent years, treating source code as natural language, parsing it into a token sequence, and training a language model such as an LSTM on that sequence has been the state-of-the-art approach to building a generative model for code completion. However, for source code with hundreds of statements, traditional LSTM models and attention-based LSTM models fail to capture the long-term dependencies of source code. In this paper, we propose a novel statement-level language model (SLM) which uses a BiLSTM to generate an embedding for each statement. A standard LSTM is adopted in SLM to iterate over and accumulate the embeddings of the statements in the context to help predict the next code token. A statement-level attention mechanism is also adopted in the model. The proposed SLM targets token-level code completion. Experiments on inner-project and cross-project data sets indicate that the proposed statement-level language model with attention mechanism (SLM) outperforms other state-of-the-art models on the token-level code completion task.

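The abstract describes a two-level architecture: a BiLSTM encodes the tokens of each statement into a statement embedding, a standard LSTM accumulates those embeddings across the statements in the context, and statement-level attention weights the accumulated states before predicting the next token. Below is a minimal PyTorch sketch of that structure; the layer sizes, the pooling of BiLSTM hidden states, and the way the attended summary feeds the output layer are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a statement-level language model (SLM) as described in the
# abstract. Hyperparameters and pooling choices are assumptions for illustration.
import torch
import torch.nn as nn


class StatementLevelLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # BiLSTM encodes the tokens of one statement into a statement embedding.
        self.stmt_encoder = nn.LSTM(embed_dim, hidden_dim,
                                    bidirectional=True, batch_first=True)
        # A standard LSTM accumulates statement embeddings over the context.
        self.context_lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        # Statement-level attention over the context LSTM outputs.
        self.attn = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, statements):
        # statements: (batch, num_statements, tokens_per_statement) token ids.
        b, s, t = statements.shape
        tok = self.token_embed(statements.view(b * s, t))
        _, (h, _) = self.stmt_encoder(tok)
        # Concatenate final forward/backward hidden states as the statement embedding.
        stmt_emb = torch.cat([h[-2], h[-1]], dim=-1).view(b, s, -1)
        ctx, _ = self.context_lstm(stmt_emb)
        # Attention weights over the statements in the context.
        weights = torch.softmax(self.attn(ctx).squeeze(-1), dim=-1)
        summary = torch.bmm(weights.unsqueeze(1), ctx).squeeze(1)
        # Predict the next token from the attended context summary.
        return self.out(summary)


# Hypothetical usage: 2 contexts, 5 statements each, 10 tokens per statement.
model = StatementLevelLM(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 5, 10)))  # -> (2, 5000)
```

In this sketch the attention summary alone drives the prediction; the paper's model may combine it with the current partial statement's state, so treat this only as an outline of the BiLSTM-over-statements idea.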
Authors (1)
  1. Yixiao Yang (9 papers)