Towards Modeling Human Attention from Eye Movements for Neural Source Code Summarization (2305.09773v1)

Published 16 May 2023 in cs.SE and cs.AI

Abstract: Neural source code summarization is the task of generating natural language descriptions of source code behavior using neural networks. A fundamental component of most neural models is an attention mechanism. The attention mechanism learns to connect features in source code to specific words to use when generating natural language descriptions. Humans also pay attention to some features in code more than others. This human attention reflects experience and high-level cognition well beyond the capability of any current neural model. In this paper, we use data from published eye-tracking experiments to create a model of this human attention. The model predicts which words in source code are the most important for code summarization. Next, we augment a baseline neural code summarization approach using our model of human attention. We observe an improvement in prediction performance of the augmented approach in line with other bio-inspired neural models.
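The augmentation the abstract describes can be pictured as biasing a standard attention mechanism with the predicted human importance of each source-code token. Below is a minimal PyTorch sketch of one plausible formulation; the additive log-bias, the alpha hyperparameter, and all names here are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def human_biased_attention(query, keys, values, human_scores, alpha=0.5):
    """Scaled dot-product attention whose logits are biased by per-token
    human-attention scores (a hypothetical formulation, not the paper's).

    query:        (batch, d)        decoder state
    keys, values: (batch, seq, d)   encoded source-code tokens
    human_scores: (batch, seq)      predicted human importance in [0, 1]
    alpha:        weight of the human-attention bias (assumed hyperparameter)
    """
    d = query.size(-1)
    # Standard attention logits: similarity of the decoder state to each token.
    logits = torch.einsum("bd,bsd->bs", query, keys) / d ** 0.5
    # Add the log-scaled human importance as an additive bias, so tokens the
    # human-attention model rates as important receive more attention mass.
    logits = logits + alpha * torch.log(human_scores + 1e-9)
    weights = F.softmax(logits, dim=-1)                 # (batch, seq)
    context = torch.einsum("bs,bsd->bd", weights, values)
    return context, weights

# Usage sketch: `h` stands in for the output of an eye-tracking-derived
# importance predictor; any per-token scorer could supply it.
q = torch.randn(2, 64)           # decoder state
k = v = torch.randn(2, 10, 64)   # 10 encoded code tokens
h = torch.rand(2, 10)            # hypothetical human-attention scores
context, weights = human_biased_attention(q, k, v, h)
print(weights.sum(dim=-1))       # each attention row sums to 1
```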

Authors (3)
  1. Aakash Bansal (22 papers)
  2. Bonita Sharif (5 papers)
  3. Collin McMillan (38 papers)
Citations (10)
