As AI systems increasingly take on instructional roles — providing feedback, guiding practice, evaluating work — a fundamental question emerges: does it matter to learners who they believe is on the other side? We built a research platform to test this, with results suggesting several answers:
- Users rate AI feedback as highly useful but invest significantly less effort in their work. Perceived human presence increases effort even with equivalent feedback content.
- Non-credible human attribution (where participants distrust the label or suspect deception) produces worse outcomes, in both time and effort invested, than transparent AI labeling.
- What people say helps them and what actually changes their behavior are not the same.
We investigated this using a three-condition experiment (N=148) in which participants completed a creative coding tutorial and received feedback generated by the same large language model, attributed either to an AI system (with instant or delayed delivery) or to a human teaching assistant (with matched delayed delivery). This three-condition design separates the effect of source attribution from the confound of delivery timing, which prior studies have not controlled for. Source attribution and timing had distinct effects on different outcomes: participants who believed the human attribution spent more time on task than those receiving equivalently timed AI-attributed feedback (d=0.61, p=.013, uncorrected), while the delivery delay independently increased output complexity without affecting time measures.
An exploratory analysis revealed that 46% of participants in the human-attributed condition did not believe the attribution, and these participants showed worse outcomes than those receiving transparent AI feedback (code complexity d=0.77, p=.003; time on task d=0.70, p=.007). These findings suggest that believed human presence may carry motivational value, but that this value depends on credibility. For computing educators, transparent AI attribution may be the lower-risk default in contexts where human attribution would not be credible.
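For readers unfamiliar with the effect sizes reported above, Cohen's d is the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch of that computation; the data values are hypothetical, invented purely for illustration, and do not come from the study:

```python
import math

def cohens_d(a, b):
    # Pooled-SD Cohen's d for two independent groups.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance, group b
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical time-on-task values (minutes) for two conditions.
human_attr = [42, 55, 61, 48, 53]
ai_attr = [35, 40, 44, 38, 41]
print(round(cohens_d(human_attr, ai_attr), 2))  # → 2.17
```

By convention, d≈0.2 is a small effect, d≈0.5 medium, and d≈0.8 large, so the study's reported values (d=0.61 to d=0.77) fall in the medium-to-large range.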