Understanding Code Semantics: An Evaluation of Transformer Models in Summarization (2310.16314v2)

Published 25 Oct 2023 in cs.LG

Abstract: This paper delves into the intricacies of code summarization using advanced transformer-based LLMs. Through empirical studies, we evaluate the efficacy of code summarization by altering function and variable names to explore whether models truly understand code semantics or merely rely on textual cues. We have also introduced adversaries like dead code and commented code across three programming languages (Python, JavaScript, and Java) to further scrutinize the models' understanding. Ultimately, our research aims to offer valuable insights into the inner workings of transformer-based LMs, enhancing their ability to understand code and contributing to more efficient software development practices and maintenance workflows.

Authors (4)
  1. Debanjan Mondal (3 papers)
  2. Abhilasha Lodha (3 papers)
  3. Ankita Sahoo (1 paper)
  4. Beena Kumari (1 paper)