
I Know Which LLM Wrote Your Code Last Summer: LLM generated Code Stylometry for Authorship Attribution (2506.17323v1)

Published 18 Jun 2025 in cs.LG, cs.AI, and cs.SE

Abstract: Detecting AI-generated code, deepfakes, and other synthetic content is an emerging research challenge. As code generated by LLMs becomes more common, identifying the specific model behind each sample is increasingly important. This paper presents the first systematic study of LLM authorship attribution for C programs. We release CodeT5-Authorship, a novel model that uses only the encoder layers from the original CodeT5 encoder-decoder architecture, discarding the decoder to focus on classification. Our model's encoder output (first token) is passed through a two-layer classification head with GELU activation and dropout, producing a probability distribution over possible authors. To evaluate our approach, we introduce LLM-AuthorBench, a benchmark of 32,000 compilable C programs generated by eight state-of-the-art LLMs across diverse tasks. We compare our model to seven traditional ML classifiers and eight fine-tuned transformer models, including BERT, RoBERTa, CodeBERT, ModernBERT, DistilBERT, DeBERTa-V3, Longformer, and LoRA-fine-tuned Qwen2-1.5B. In binary classification, our model achieves 97.56% accuracy in distinguishing C programs generated by closely related models such as GPT-4.1 and GPT-4o, and 95.40% accuracy for multi-class attribution among five leading LLMs (Gemini 2.5 Flash, Claude 3.5 Haiku, GPT-4.1, Llama 3.3, and DeepSeek-V3). To support open science, we release the CodeT5-Authorship architecture, the LLM-AuthorBench benchmark, and all relevant Google Colab scripts on GitHub: https://github.com/LLMauthorbench/.

Summary

Overview of "I Know Which LLM Wrote Your Code Last Summer: LLM-generated Code Stylometry for Authorship Attribution"

The paper "I Know Which LLM Wrote Your Code Last Summer: LLM-generated Code Stylometry for Authorship Attribution" presents an in-depth exploration of attributing authorship to code generated by LLMs. This work is situated within the broader discourse of AI-generated content detection, focusing specifically on the unique challenge of identifying the subtle stylistic imprints left by LLMs in code, particularly in the C programming language.

Research Contributions

The authors make several significant contributions, notably the introduction of the CodeT5-Authorship model and the LLM-AuthorBench dataset:

  • CodeT5-Authorship: This model adapts the CodeT5 architecture, notably stripping the decoder to focus exclusively on encoder-based classification. By leveraging the encoder's output directly through a two-layer classifier with GELU activation and dropout, it generates a probability distribution among potential model authors, thus enhancing attribution accuracy.
  • LLM-AuthorBench: The dataset is an extensive collection of 32,000 compilable C programs generated by eight diverse LLMs. These include established models such as GPT-4.1 and GPT-4o, Gemini 2.5 Flash, Claude 3.5 Haiku, Llama 3.3, and DeepSeek-V3. The benchmark is carefully curated to ensure model diversity and programming-task variety, providing a robust foundation for evaluating LLM authorship attribution.
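The classification head described above can be sketched in plain PyTorch. This is a minimal illustration of the stated design (first-token pooling, two linear layers with GELU and dropout); the hidden size, dropout rate, and number of author classes are illustrative assumptions, and the actual CodeT5 encoder is replaced here by a random tensor of the right shape.

```python
import torch
import torch.nn as nn

class AuthorshipHead(nn.Module):
    """Two-layer classification head with GELU activation and dropout,
    applied to the encoder's first-token representation, as described
    in the paper. Dimensions here are illustrative assumptions."""

    def __init__(self, hidden_size: int = 768, num_authors: int = 5, dropout: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_size, num_authors),
        )

    def forward(self, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
        # encoder_hidden_states: (batch, seq_len, hidden) from a CodeT5-style encoder
        first_token = encoder_hidden_states[:, 0, :]  # first-token pooling
        return self.net(first_token)                  # logits; softmax yields author probabilities

# Stand-in for real encoder output: batch of 2 sequences, 16 tokens, hidden size 768
head = AuthorshipHead()
logits = head(torch.randn(2, 16, 768))
probs = torch.softmax(logits, dim=-1)  # shape (2, 5); each row sums to 1
```

In practice the encoder hidden states would come from the retained CodeT5 encoder (e.g. via Hugging Face's `T5EncoderModel`), fine-tuned jointly with this head.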

Experimental Evaluation

The research evaluates the performance of the CodeT5-Authorship model against seven traditional machine learning classifiers and eight fine-tuned transformer models such as BERT, RoBERTa, and Longformer. Key findings from their experiments indicate:

  • Binary Classification: CodeT5-Authorship demonstrates an impressive 97.56% accuracy in distinguishing between closely related LLMs like GPT-4.1 and GPT-4o, underscoring the subtle but distinguishable stylistic signatures of these models.
  • Multi-Class Attribution: The model also achieves 95.40% accuracy in attributing C code to one of five leading LLMs. This demonstrates the potential for detecting the unique styles of various models despite their similar training corpora.

Methodological Insights

The paper highlights the efficacy of stylometric analysis in model-specific authorship attribution, particularly through:

  • Stylometry and Machine Learning: The integration of lexical metrics, syntactic analysis, and comment density stands out as a critical factor in enhancing attribution accuracy.
  • Comparison with Other Approaches: CodeT5-Authorship competes favorably with classical ML approaches and other transformer-based models. Its encoder-only setup offers a computational advantage without sacrificing precision, suggesting a promising direction for future model development.
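To make the stylometric angle concrete, features of the kind mentioned above (lexical metrics, comment density) can be extracted from raw C source with a few lines of Python. The exact feature set used by the paper's traditional ML baselines is not specified here, so this is a hedged sketch of representative features, not the authors' pipeline.

```python
import re

def stylometric_features(c_source: str) -> dict:
    """Extract simple stylometric features from C source code:
    lexical metrics plus comment density. Illustrative only."""
    lines = c_source.splitlines()
    # Crude lexer: words and individual punctuation characters
    tokens = re.findall(r"\w+|[^\w\s]", c_source)
    # Line comments (// ...) and block comments (/* ... */)
    comments = re.findall(r"//[^\n]*|/\*.*?\*/", c_source, flags=re.S)
    comment_chars = sum(len(c) for c in comments)
    return {
        "loc": len(lines),
        "avg_line_len": sum(len(l) for l in lines) / max(len(lines), 1),
        "token_count": len(tokens),
        "unique_token_ratio": len(set(tokens)) / max(len(tokens), 1),
        "comment_density": comment_chars / max(len(c_source), 1),
    }

sample = """\
/* compute factorial */
#include <stdio.h>
int fact(int n) { return n < 2 ? 1 : n * fact(n - 1); }  // recursive
"""
feats = stylometric_features(sample)
```

Feature vectors like this would then be fed to a classical classifier (e.g. gradient boosting or an SVM) as a baseline against the transformer models.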

Implications and Future Directions

The implications of this paper are significant for digital forensics, academic integrity, and software supply chain security. By pushing beyond human-vs-LLM detection to discerning specific model footprints, the research lays the groundwork for more sophisticated accountability mechanisms and tracing capabilities in AI-assisted software development environments.

Future research directions might explore:

  • Cross-Language Attribution: Extending the methodology to other languages can further validate and refine techniques for cross-LLM attribution.
  • Adversarial Robustness: Evaluating model resilience to code obfuscations or intentional tampering could enhance practical applicability.
  • Scalability: Investigating the method’s performance and scalability in larger and more varied LLM landscapes offers potential for wider adoption and integration.

This research exemplifies a robust approach to addressing the emerging challenges and nuances of LLM-generated content, crucial for maintaining transparency and security in a rapidly evolving technological landscape.
