Automated Source Code Generation and Auto-completion Using Deep Learning: Comparing and Discussing Current Language-Model-Related Approaches (2009.07740v4)

Published 16 Sep 2020 in cs.CL, cs.LG, cs.PL, and cs.SE

Abstract: In recent years, the use of deep learning in LLMs has gained much attention. Some research projects claim that they can generate text that can be interpreted as human-written, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable for applying this type of modeling is programming languages. For years, the Machine Learning community has researched this software engineering area, pursuing goals like auto-completing, generating, fixing, or evaluating code written by humans. Considering the increasing popularity of the Deep-Learning-enabled LLMs approach, we detected a lack of empirical papers comparing different deep learning architectures for creating and using LLMs based on programming code. This paper compares neural network architectures such as AWD-LSTMs, AWD-QRNNs, and Transformers, using transfer learning and different tokenizations, to see how they behave when building LLMs from a Python dataset for code-generation and fill-in-the-mask tasks. Considering the results, we discuss each approach's strengths and weaknesses and the gaps we find in evaluating these LLMs or applying them in a real programming context.
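
As a rough illustration of the two evaluation tasks named in the abstract (code generation and fill-in-the-mask), the sketch below uses the Hugging Face `transformers` pipeline API with generic placeholder checkpoints (`gpt2`, `roberta-base`). These checkpoints are assumptions for demonstration only; the paper trains and fine-tunes its own models on a Python source-code corpus rather than using these off-the-shelf weights.

```python
from transformers import pipeline

# Hedged sketch: placeholder checkpoints stand in for the paper's own
# models, which are trained/fine-tuned on a Python code dataset.

# 1) Code generation: continue a Python prompt token by token.
generator = pipeline("text-generation", model="gpt2")
prompt = "def fibonacci(n):"
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])

# 2) Fill-in-the-mask: predict a masked token inside a code snippet.
fill_mask = pipeline("fill-mask", model="roberta-base")
snippet = "def add(a, b):\n    return a <mask> b"
for candidate in fill_mask(snippet):
    print(candidate["token_str"], round(candidate["score"], 3))
```

A model actually fine-tuned on Python would be expected to rank code-like completions (for example the `+` operator in the masked snippet) far higher than a general-purpose English checkpoint does.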

Authors (4)
  1. Juan Cruz-Benito (7 papers)
  2. Sanjay Vishwakarma (18 papers)
  3. Francisco Martin-Fernandez (5 papers)
  4. Ismael Faro (7 papers)
Citations (30)
