Exploring Software Naturalness through Neural Language Models (2006.12641v2)

Published 22 Jun 2020 in cs.CL, cs.LG, and cs.PL

Abstract: The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing. We explore this hypothesis through the use of a pre-trained transformer-based LLM to perform code analysis tasks. Present approaches to code analysis depend heavily on features derived from the Abstract Syntax Tree (AST), while our transformer-based LLMs work on raw source code. This work is the first to investigate whether such LLMs can discover AST features automatically. To achieve this, we introduce a sequence labeling task that directly probes the LLM's understanding of the AST. Our results show that transformer-based LLMs achieve high accuracy in the AST tagging task. Furthermore, we evaluate our model on a software vulnerability identification task. Importantly, we show that our approach obtains vulnerability identification results comparable to graph-based approaches that rely heavily on compilers for feature extraction.
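
The probing task described here amounts to sequence labeling: each token in the raw source is paired with a label derived from the AST node it belongs to, and the pre-trained model is asked to predict those labels without ever seeing the parse. As a minimal sketch of how such gold labels could be derived (assuming Python's stdlib ast module as a stand-in parser and a toy label scheme; the paper targets C code with its own labeling), consider:

```python
import ast

def ast_tags(source: str) -> list[tuple[str, str]]:
    """Pair each identifier/literal token with the type of its AST node."""
    tree = ast.parse(source)
    # Leaf nodes that correspond directly to surface tokens.
    leaves = [n for n in ast.walk(tree)
              if isinstance(n, (ast.Name, ast.Constant))]
    # Sort into source order so labels align with the token sequence.
    leaves.sort(key=lambda n: (n.lineno, n.col_offset))
    return [(ast.get_source_segment(source, n), type(n).__name__)
            for n in leaves]

print(ast_tags("x = foo(1 + y)"))
# [('x', 'Name'), ('foo', 'Name'), ('1', 'Constant'), ('y', 'Name')]
```

A transformer encoder fine-tuned for token classification against labels of this kind never sees the parser's output at inference time, so high tagging accuracy is evidence that AST structure is recoverable from raw source text alone.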

Authors (11)
  1. Luca Buratti (13 papers)
  2. Saurabh Pujar (14 papers)
  3. Mihaela Bornea (10 papers)
  4. Scott McCarley (6 papers)
  5. Yunhui Zheng (11 papers)
  6. Gaetano Rossiello (21 papers)
  7. Alessandro Morari (10 papers)
  8. Jim Laredo (8 papers)
  9. Veronika Thost (21 papers)
  10. Yufan Zhuang (16 papers)
  11. Giacomo Domeniconi (7 papers)
Citations (93)
