Overview of "I Know Which LLM Wrote Your Code Last Summer: LLM-generated Code Stylometry for Authorship Attribution"
The paper "I Know Which LLM Wrote Your Code Last Summer: LLM-generated Code Stylometry for Authorship Attribution" presents an in-depth exploration of attributing authorship to code generated by LLMs. The work sits within the broader discourse of AI-generated content detection, focusing on the distinct challenge of identifying the subtle stylistic imprints that individual LLMs leave in code, particularly in the C programming language.
Research Contributions
The authors make several significant contributions, notably the introduction of the CodeT5-Authorship model and the LLM-AuthorBench dataset:
- CodeT5-Authorship: This model adapts the CodeT5 architecture by removing the decoder to focus exclusively on encoder-based classification. The encoder output feeds a two-layer classification head with GELU activation and dropout, producing a probability distribution over candidate model authors and thereby improving attribution accuracy.
- LLM-AuthorBench: This dataset is an extensive collection of 32,000 C programs generated by eight diverse LLMs, including GPT-4.1, GPT-4o, Gemini 2.5 Flash, Claude 3.5 Haiku, Llama 3.3, and DeepSeek-V3. The benchmark is carefully curated to ensure model diversity and programming task variety, providing a robust foundation for evaluating LLM authorship attribution.
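The encoder-only classification head described above can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the mean-pooling strategy, layer sizes, dropout rate, and random weights are illustrative assumptions; in the actual model the hidden states would come from a trained CodeT5 encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 768      # assumed CodeT5 encoder hidden size
NUM_AUTHORS = 8   # candidate LLM authors, matching LLM-AuthorBench

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class AuthorshipHead:
    """Two-layer classifier over pooled encoder states: Linear -> GELU -> dropout -> Linear."""

    def __init__(self, hidden=HIDDEN, num_authors=NUM_AUTHORS, p_drop=0.1):
        self.w1 = rng.normal(0.0, 0.02, (hidden, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.02, (hidden, num_authors))
        self.b2 = np.zeros(num_authors)
        self.p_drop = p_drop

    def __call__(self, encoder_states, train=False):
        # encoder_states: (seq_len, hidden) hidden states from the encoder
        pooled = encoder_states.mean(axis=0)  # mean-pool over token positions
        h = gelu(pooled @ self.w1 + self.b1)
        if train:  # inverted dropout, active only during training
            h *= rng.binomial(1, 1.0 - self.p_drop, h.shape) / (1.0 - self.p_drop)
        # probability distribution over candidate model authors
        return softmax(h @ self.w2 + self.b2)

head = AuthorshipHead()
probs = head(rng.normal(size=(128, HIDDEN)))  # fake encoder output for a 128-token program
print(probs.shape)
```

Dropping the decoder in this way removes the generation machinery entirely, which is what gives the encoder-only setup its computational advantage for pure classification.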
Experimental Evaluation
The research evaluates the CodeT5-Authorship model against seven traditional machine learning classifiers and eight fine-tuned transformer models, including BERT, RoBERTa, and Longformer. Key findings include:
- Binary Classification: CodeT5-Authorship achieves 97.56% accuracy in distinguishing between closely related LLMs like GPT-4.1 and GPT-4o, underscoring the subtle but distinguishable stylistic signatures of these models.
- Multi-Class Attribution: The model also achieves 95.40% accuracy in attributing C code to one of five leading LLMs. This demonstrates the potential for detecting the unique styles of various models despite their similar training corpora.
Methodological Insights
The paper highlights the efficacy of stylometric analysis in model-specific authorship attribution, particularly through:
- Stylometry and Machine Learning: The integration of lexical metrics, syntactic analysis, and comment density stands out as a critical factor in enhancing attribution accuracy.
- Comparison with Other Approaches: CodeT5-Authorship competes favorably with classical ML approaches and other transformer-based models. Its encoder-only setup offers a computational advantage without sacrificing precision, suggesting a promising direction for future model development.
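Stylometric features of the kind described above can be computed with lightweight parsing. The sketch below is a hypothetical illustration, not the paper's feature set: the regex-based comment handling and the specific feature names (`comment_density`, `avg_identifier_len`, `avg_line_len`) are assumptions chosen to show how lexical metrics and comment density might be derived from a C source string.

```python
import re

# Matches /* ... */ block comments and // line comments in C source.
_COMMENT_RE = re.compile(r"/\*.*?\*/|//[^\n]*", re.DOTALL)

def c_stylometry(source: str) -> dict:
    """Extract a few simple stylometric features from C source code."""
    comments = _COMMENT_RE.findall(source)
    comment_chars = sum(len(c) for c in comments)
    # Strip comments so identifiers are only counted inside actual code.
    code_only = _COMMENT_RE.sub("", source)
    identifiers = re.findall(r"\b[A-Za-z_]\w*\b", code_only)
    lines = [ln for ln in source.splitlines() if ln.strip()]
    return {
        "comment_density": comment_chars / max(len(source), 1),
        "avg_identifier_len": sum(map(len, identifiers)) / max(len(identifiers), 1),
        "avg_line_len": sum(map(len, lines)) / max(len(lines), 1),
    }

sample = """\
/* compute factorial iteratively */
int factorial(int n) {
    int result = 1;           // running product
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}
"""
features = c_stylometry(sample)
print(features)
```

Feature vectors of this shape are what the traditional ML classifiers in the comparison would consume, whereas the transformer models operate on the raw token sequence directly.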
Implications and Future Directions
The implications of this paper are significant for digital forensics, academic integrity, and software supply chain security. By pushing beyond human-vs-LLM detection to discerning specific model footprints, the research lays the groundwork for more sophisticated accountability mechanisms and tracing capabilities in AI-assisted software development environments.
Future research directions might explore:
- Cross-Language Attribution: Extending the methodology to other programming languages could further validate and refine techniques for cross-LLM attribution.
- Adversarial Robustness: Evaluating model resilience to code obfuscations or intentional tampering could enhance practical applicability.
- Scalability: Investigating the method’s performance and scalability in larger and more varied LLM landscapes offers potential for wider adoption and integration.
This research exemplifies a robust approach to addressing the emerging challenges and nuances of LLM-generated content, crucial for maintaining transparency and security in a rapidly evolving technological landscape.