- The paper's main contribution is a qualitative threshold model for ascribing authorship, grounded in European copyright law.
- It combines doctrinal analysis with empirical insights to distinguish between cognitive support and problematic AI substitution in academic writing.
- The study emphasizes that meaningful human intellectual input is essential for authorship, promoting transparency and academic integrity.
Legal and Normative Perspectives on Authorship in GenAI-Aided Academic Work
Introduction
The proliferation of Generative AI (GenAI) systems in higher education has precipitated a reevaluation of foundational academic constructs, particularly those related to authorship, responsibility, and academic integrity. "Who is the author? A legal and normative view of authorship in Generative AI-aided academic works" (2604.04700) undertakes a rigorous analysis of these issues through the lens of European legal doctrine, primarily copyright law, and its intersection with supranational and institutional frameworks. The work introduces a qualitative threshold model for ascribing authorship in AI-mediated academic outputs, providing both doctrinal clarity and practical guidance.
Human-GenAI Interaction: From Cognitive Support to Substitution
The paper offers an incisive breakdown of student-GenAI interaction modes in academic writing, recognizing a spectrum from benign cognitive scaffolding to problematic cognitive substitution. Empirical studies are marshaled to demonstrate that GenAI, when used as an adjunct to human reasoning (e.g., brainstorming, organizing ideas), can enhance learning outcomes, provided student engagement is active and critical. However, the research emphasizes that when GenAI output supplants core intellectual tasks—reasoning, analysis, creative decision-making—there is a substantive erosion of the educational and normative values underpinning academic authorship. This substitution manifests not only as pedagogical distortion but also as latent misrepresentation or plagiarism, obscuring the actual intellectual contribution of the purported author.
The pivotal insight is that authorship is best understood as a threshold construct contingent on meaningful human intellectual contribution; it is neither binary nor mechanistically determined solely by the presence of AI assistance.
Doctrinal Framework: European Copyright Law and Authorship
The analysis draws extensively on evolving European copyright jurisprudence, particularly the requirement that protected works result from "human intellectual creation." Binding directives (Directive 2001/29/EC—InfoSoc, Directive 2019/790—DSM) and CJEU case law (Infopaq, Painer, Cofemel) are cited to anchor two core principles:
- Authorship must be attributable to a natural person who exercises creative freedom and makes intellectual choices.
- Outputs generated by AI without substantive human intellectual intervention are eligible neither for authorship nor for the attendant suite of rights.
The paper argues that in academic settings, where the assessment of work presupposes the student's reasoning and intellectual development, this requirement has heightened salience. The research advances a qualitative framework for authorship determination, operationalizing legal doctrine into assessment criteria: explainability, intellectual control, nature of AI output, substitutability, and whether the human role was limited to mere prompt formulation. Triggering any exclusion criterion—such as inability to defend the work, or the submission being substantially identical to unmodified AI output—precludes attribution of authorship.
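The exclusion logic of this framework can be made concrete in a short sketch. The criterion names below are taken from the paper's framework, but the field names, function, and encoding are hypothetical illustrations, not the paper's own formalization; the framework itself is a qualitative judgment, which code can only approximate.

```python
from dataclasses import dataclass

@dataclass
class AuthorshipAssessment:
    """Hypothetical encoding of the paper's qualitative criteria."""
    can_explain_and_defend: bool          # explainability: the student can account for the work
    exercised_intellectual_control: bool  # the creative and intellectual choices were the student's
    output_substantially_unmodified: bool # submission is ~identical to raw AI output
    role_limited_to_prompting: bool       # human input was mere prompt formulation

def meets_authorship_threshold(a: AuthorshipAssessment) -> bool:
    # Exclusion criteria: triggering any single one precludes authorship.
    if not a.can_explain_and_defend:
        return False
    if a.output_substantially_unmodified:
        return False
    if a.role_limited_to_prompting:
        return False
    # Beyond the exclusions, the remaining qualitative question is
    # whether meaningful human intellectual contribution is present.
    return a.exercised_intellectual_control
```

The point the sketch captures is structural: authorship under the model is not a score to be averaged but a threshold gated by exclusions, with a residual qualitative judgment about intellectual control.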
Alignment with European Regulatory Instruments
The doctrinal stance is bolstered by recent regulatory developments. The AI Act (Regulation (EU) 2024/1689) stresses human oversight, transparency, and the non-delegability of responsibility, aligning directly with the imperative that authorship remains a function of human agency. Similarly, the GDPR's principles of transparency and accountability reinforce requirements for disclosure and traceability regarding AI involvement in academic work—key for maintaining assessment fairness and trust.
The analysis highlights how academic institutions, though enjoying considerable autonomy, increasingly operationalize these regulatory principles through codes of conduct, mandatory disclosure policies, and explicit guidelines for GenAI use. At the supra-institutional level, organizations such as EUA, ENAI, and ENQA codify expectations that reinforce the human-centered conception of authorship: transparency, attribution, and consistent oversight. Notably, simple self-disclosure of AI use is deemed insufficient; instead, granular reporting on the manner and extent of AI involvement is advocated.
Implications for Practice and Policy
The practical implications of the proposed framework are considerable:
- Assessment protocols must evolve to incorporate qualitative authorship thresholds, ensuring that GenAI tools are integrated as cognitive aids rather than autonomous surrogates.
- Institutional policy must emphasize detailed disclosure of AI contributions and require that students articulate and defend their intellectual input.
- There is a convergence toward multi-level governance, where binding EU regulations and sector-specific soft law shape a coherent, human-centric approach to AI in education.
From a theoretical perspective, this paradigm refines prevailing notions of academic integrity, situating them at the intersection of evolving technology and entrenched legal doctrine. The qualitative threshold model could serve as a template for broader global adaptation, particularly as jurisdictions confront the necessity of updating intellectual property frameworks for an era of pervasive AI assistance.
Prospects for Future Developments
The trajectory of GenAI adoption in higher education, coupled with continuing advances in EU regulatory harmonization, suggests several pathways for further evolution:
- Refinement and standardization of qualitative threshold models for authorship across disciplinary, national, and global boundaries.
- Deeper integration of AI literacy and academic integrity education into curricula, ensuring that students and staff can distinguish between acceptable support and prohibited substitution.
- Potential legislative updates explicitly addressing GenAI outputs and authorship in educational contexts, possibly drawing on principles set out here.
Conclusion
By systematically integrating European copyright jurisprudence, regulatory acts, and supranational governance with pedagogical objectives, the paper provides a robust normative architecture for addressing authorship in the GenAI era. The argument for qualitative, context-sensitive thresholds offers both legal clarity and pedagogical coherence. Sustaining the link between authorship, responsibility, and learning is presented not only as a legal imperative but as essential for the credibility and integrity of academic qualifications in an age of increasingly capable AI systems.