Effects of suspected AI authorship of human-attributed feedback

Determine the consequences for learner behavior and experience in computing education when learners suspect that feedback attributed to a human teaching assistant or instructor was actually generated by an artificial intelligence system.

Background

The paper investigates whether learners behave differently when feedback is attributed to an AI system versus a human, carefully separating source attribution from delivery timing. Beyond this comparison, the authors highlight a growing real-world scenario: instructors increasingly use LLMs to draft feedback that is then delivered under their own name, raising the possibility that learners may suspect human-attributed feedback is actually AI-generated.

The authors note that while prior work has examined perceptions of AI versus human feedback, no prior studies had examined how the credibility of the attributed source moderates feedback's effects on learners. They explicitly state that what happens when learners suspect a human label is not genuine is unknown, motivating their experimental design and the exploratory credibility analysis.

References

Whether feedback is openly labeled as AI-generated or delivered under an instructor's name, learners may suspect that ostensibly human feedback was actually generated by AI, and what happens when that suspicion arises is unknown.