Agentic AI and hallucinations

C-Tier
Journal: Economics Letters
Year: 2025
Volume: 255
Issue: C

Authors (2)

Iyidogan, Engin (not in RePEc)
Ozkes, Ali I. (SKEMA Business School)

Score contribution per author:

0.503 = (α = 2.01 ÷ 2 authors) × 0.5 (C-tier weight)

α: calibrated so average coauthorship-adjusted count equals average raw count
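The per-author score contribution above can be sketched as a small calculation. The C-tier weight of 0.5 and the calibrated α = 2.01 are taken from this entry; the function name and rounding convention are illustrative assumptions, not the database's actual implementation.

```python
def author_score(alpha: float, n_authors: int, tier_weight: float) -> float:
    """Coauthorship-adjusted score contribution per author:
    (alpha / n_authors) * tier_weight."""
    return (alpha / n_authors) * tier_weight

# Values from this entry: alpha = 2.01, 2 authors, C-tier weight 0.5
score = author_score(2.01, 2, 0.5)
print(f"{score:.3f}")  # approximately 0.503 after rounding
```

Exact arithmetic gives (2.01 / 2) × 0.5 = 0.5025, which matches the 0.503 shown above up to rounding.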

Abstract

We model a competitive market in which AI agents buy answers from upstream generative models and resell them to users who differ in how much they value accuracy and in how much they fear hallucinations. Agents can privately exert costly verification effort to lower hallucination risk. Because interactions halt in the event of a hallucination, the threat of losing future rents disciplines effort. A unique reputational equilibrium exists under nontrivial discounting. Equilibrium effort, and thus the price, increases with the share of users who have high accuracy concerns, implying that hallucination-sensitive sectors, such as law and medicine, endogenously elicit stronger verification effort in agentic AI markets.

Technical Details

RePEc Handle
repec:eee:ecolet:v:255:y:2025:i:c:s016517652500357x
Journal Field
General
Author Count
2
Added to Database
2026-01-26