"The Trust Gap"

AI-mediated video communication reduces trust without reducing accuracy.

Two preregistered experiments with 2,000 participants tested how AI modifications to video — retouching, background replacement, avatars — affect interpersonal judgment. Participants who knew their communication partner's video was AI-enhanced reported lower confidence in their own assessments and rated their partners as less trustworthy. The subjective experience of trust degraded measurably.

But their actual ability to detect deception did not change. Participants watching AI-mediated video were just as accurate at distinguishing truth from lies as those watching unmodified video. They did not unfairly suspect AI-tool users of lying. The signal remained intact while the confidence in reading the signal collapsed.

This is a dissociation between calibration and performance: confidence collapses while accuracy holds. AI mediation introduces uncertainty about what is real in the visual channel (is that smile genuine or enhanced? is that background their actual office?) and that uncertainty propagates into the trust assessment without propagating into the judgment itself. The perceptual system still extracts the cues relevant to deception detection. The metacognitive system, which tracks the reliability of those cues, correctly registers that the channel has been tampered with and downgrades confidence accordingly.
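The dissociation can be sketched in a toy simulation. Everything here is illustrative, not the study's design: the Gaussian cue model, the 0.3 confidence penalty, and the assumption that mediation taxes only reported confidence are my assumptions. Both groups see identical trials and make identical perceptual judgments; only the metacognitive report differs.

```python
import random

def simulate(n_trials, confidence_penalty, seed=42):
    """Toy model: each trial a speaker lies or tells the truth and emits a
    noisy deception cue. The judge classifies from the cue (perception) and
    reports confidence (metacognition). AI mediation is modeled purely as a
    confidence penalty -- the cue itself is untouched."""
    rng = random.Random(seed)  # same seed => both groups see identical trials
    correct = 0
    total_conf = 0.0
    for _ in range(n_trials):
        lying = rng.random() < 0.5
        cue = rng.gauss(1.0 if lying else -1.0, 1.0)  # leaked deception cue
        correct += ((cue > 0) == lying)               # perceptual judgment
        total_conf += min(1.0, abs(cue)) * (1 - confidence_penalty)
    return correct / n_trials, total_conf / n_trials

acc_plain, conf_plain = simulate(10_000, confidence_penalty=0.0)
acc_ai, conf_ai = simulate(10_000, confidence_penalty=0.3)
# Same trials, same judgments: accuracy is identical across conditions,
# while reported confidence is strictly lower under mediation.
```

Because the perceptual pathway is untouched, accuracy comes out identical by construction; the only thing that moves is the confidence report, which is the pattern the experiments observed.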

The practical implication is uncomfortable: the trust reduction is rational. If you know the visual channel is being manipulated by a system whose fidelity you cannot verify, reducing trust in that channel is the correct Bayesian update. The preserved accuracy is the surprise: it suggests the deception cues live in channels that current AI mediation tools do not yet modify.
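That "correct Bayesian update" can be made concrete with a small sketch. The mixture model and all numbers are my illustration, not from the experiments: assume the channel transmits a visual cue faithfully with probability `fidelity`, and otherwise replaces it with something uninformative (equally likely under honesty or deception).

```python
def posterior_honest(prior, p_cue_honest, p_cue_deceptive, fidelity):
    """Posterior belief that the partner is honest after seeing a visual
    cue, when the channel is faithful with probability `fidelity` and
    otherwise emits an uninformative signal (likelihood 0.5 under either
    hypothesis). fidelity=1.0 recovers plain Bayes; fidelity=0.0 leaves
    the prior unchanged."""
    lik_h = fidelity * p_cue_honest + (1 - fidelity) * 0.5
    lik_d = fidelity * p_cue_deceptive + (1 - fidelity) * 0.5
    return prior * lik_h / (prior * lik_h + (1 - prior) * lik_d)

# Unmodified video: a reassuring cue moves belief from 0.5 to 0.8.
full = posterior_honest(0.5, 0.8, 0.2, fidelity=1.0)   # -> 0.8
# Fully mediated video: the same cue moves belief nowhere.
none = posterior_honest(0.5, 0.8, 0.2, fidelity=0.0)   # -> 0.5
```

Lower fidelity shrinks the posterior toward the prior, which is exactly "trusting the channel less": the viewer discounts how much any visible cue should move their belief, without that discount making their truth/lie classification worse on the cues that do get through.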