
Better Language Models of Code through Self-Improvement (2304.01228v2)

Published 2 Apr 2023 in cs.CL and cs.AI

Abstract: Pre-trained language models for code (PLMCs) have gained attention in recent research. These models are pre-trained on large-scale datasets using multi-modal objectives. However, fine-tuning them requires extensive supervision and is limited by the size of the provided dataset. We aim to address this issue by proposing a simple data augmentation framework. Our framework utilizes knowledge gained during the pre-training and fine-tuning stages to generate pseudo data, which is then used as training data for the next step. We incorporate this framework into state-of-the-art language models such as CodeT5, CodeBERT, and UnixCoder. The results show that our framework significantly improves PLMCs' performance in code-related sequence generation tasks, such as code summarization and code generation, on the CodeXGLUE benchmark.
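The abstract describes a self-improvement loop: a fine-tuned PLMC labels additional code with pseudo targets, and the combined gold-plus-pseudo data is used for the next round of fine-tuning. The sketch below illustrates that loop under assumptions not taken from the paper: CodeT5 loaded through Hugging Face transformers, a toy in-memory dataset, greedy decoding, and no filtering of pseudo pairs; the authors' exact generation, filtering, and training schedule may differ.

```python
# Minimal sketch of pseudo-data augmentation for code summarization.
# Assumed model and library choices are illustrative, not the paper's setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Salesforce/codet5-base"  # one of the PLMCs named in the abstract

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_pseudo_targets(code_snippets, max_length=64):
    """Use the (already fine-tuned) model to label code snippets with summaries."""
    pseudo_pairs = []
    for code in code_snippets:
        inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
        output_ids = model.generate(**inputs, max_length=max_length)
        summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        pseudo_pairs.append((code, summary))
    return pseudo_pairs

# Self-improvement step: mix pseudo-labeled pairs into the training set and
# fine-tune again on the augmented data (the fine-tuning loop itself is omitted).
gold_pairs = [("def add(a, b):\n    return a + b", "Add two numbers.")]
unlabeled_code = ["def is_even(n):\n    return n % 2 == 0"]
augmented_pairs = gold_pairs + generate_pseudo_targets(unlabeled_code)
```

Each round of generation and retraining reuses the knowledge already captured by the fine-tuned model, which is the sense in which the framework is "self-improving."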

Authors (4)
  1. Hung Quoc To (2 papers)
  2. Nghi D. Q. Bui (30 papers)
  3. Jin Guo (42 papers)
  4. Tien N. Nguyen (24 papers)
Citations (13)