Towards Learning (Dis)-Similarity of Source Code from Program Contrasts (2110.03868v2)

Published 8 Oct 2021 in cs.PL, cs.AI, cs.LG, and cs.SE

Abstract: Understanding the functional (dis)-similarity of source code is significant for code modeling tasks such as software vulnerability detection and code clone detection. We present DISCO (DIS-similarity of COde), a novel self-supervised model focusing on identifying (dis)similar functionalities of source code. Unlike existing works, our approach does not require a huge amount of randomly collected data. Rather, we design structure-guided code transformation algorithms to generate synthetic code clones and inject real-world security bugs, augmenting the collected datasets in a targeted way. We propose to pre-train the Transformer model with such automatically generated program contrasts to better identify similar code in the wild and differentiate vulnerable programs from benign ones. To better capture the structural features of source code, we propose a new cloze objective to encode the local tree-based context (e.g., parent or sibling nodes). We pre-train our model with a much smaller dataset, only 5% the size of state-of-the-art models' training datasets, to illustrate the effectiveness of our data augmentation and pre-training approach. The evaluation shows that, even with much less data, DISCO can still outperform the state-of-the-art models in vulnerability and code clone detection tasks.
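To make the program-contrast idea concrete, the sketch below shows an InfoNCE-style contrastive objective over (anchor, synthetic clone, bug-injected variant) triples: the anchor program is pulled toward its semantics-preserving variant and pushed away from its vulnerable counterpart. This is an illustrative, assumption-based rendering rather than the authors' implementation; the embedding dimension, temperature, and the random stand-ins for encoder outputs are placeholders, and DISCO's actual loss and tree-based cloze objective are defined in the paper.

```python
# Minimal sketch (not the authors' code) of contrastive pre-training over
# program contrasts: an anchor function should embed close to a synthetic
# clone and far from a bug-injected variant of the same function.

import torch
import torch.nn.functional as F


def contrastive_loss(anchor, positive, negative, temperature=0.07):
    """InfoNCE-style loss over one (anchor, clone, buggy-variant) triple per row.

    anchor, positive, negative: (batch, dim) sequence-level embeddings
    produced by some Transformer encoder (a stand-in here).
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)

    # Cosine similarities scaled by temperature.
    sim_pos = (anchor * positive).sum(dim=-1) / temperature   # (batch,)
    sim_neg = (anchor * negative).sum(dim=-1) / temperature   # (batch,)

    # Treat the clone as the "correct class" against the bug-injected variant.
    logits = torch.stack([sim_pos, sim_neg], dim=1)            # (batch, 2)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)     # positive = index 0
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    batch, dim = 4, 256
    # Stand-ins for encoder outputs of the original function, its synthetic
    # clone (semantics-preserving transform), and its bug-injected counterpart.
    anchor = torch.randn(batch, dim)
    clone = anchor + 0.05 * torch.randn(batch, dim)   # near the anchor
    buggy = torch.randn(batch, dim)                   # pushed apart by the loss
    print(contrastive_loss(anchor, clone, buggy).item())
```

In this reading, the hard positives come from structure-guided, semantics-preserving transformations (e.g., variable renaming or statement reordering), while the hard negatives come from targeted security-bug injections, which is what lets the model learn functional (dis)-similarity from far less data than random sampling would require.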

Authors (6)
  1. Yangruibo Ding (17 papers)
  2. Luca Buratti (13 papers)
  3. Saurabh Pujar (14 papers)
  4. Alessandro Morari (10 papers)
  5. Baishakhi Ray (88 papers)
  6. Saikat Chakraborty (62 papers)
Citations (33)
