Artificial intelligence, ethics, and intergenerational responsibility

B-Tier
Journal: Journal of Economic Behavior & Organization
Year: 2022
Volume: 203
Issue: C
Pages: 284-317

Authors (3)

Klockmann, Victor (not in RePEc)
von Schenk, Alicia (not in RePEc)
Villeval, Marie Claire (Institute of Labor Economics (...)

Score contribution per author:

0.670 = (α = 2.01 / 3 authors) × 1.0 (B-tier multiplier)

α: calibrated so average coauthorship-adjusted count equals average raw count
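The score contribution above can be sketched as a small computation. The α value (2.01), author count, and B-tier multiplier of 1.0 come from this entry; the function name and signature are illustrative, not part of the database's actual implementation.

```python
def score_contribution(alpha: float, n_authors: int, tier_multiplier: float) -> float:
    """Coauthorship-adjusted score: alpha split evenly across authors,
    then scaled by the journal-tier multiplier (1.0 for B-tier per this entry)."""
    return (alpha / n_authors) * tier_multiplier

# Reproduces the 0.670 figure shown above (to three decimals):
print(round(score_contribution(2.01, 3, 1.0), 3))  # 0.67
```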

Abstract

In the future, artificially intelligent algorithms will make more and more decisions on behalf of humans that involve humans’ social preferences. They can learn these preferences through the repeated observation of human behavior in social encounters. In such a context, do individuals adjust the selfishness or prosociality of their behavior when it is common knowledge that their actions produce various externalities through the training of an algorithm? In an online experiment, we let participants’ choices in dictator games train an algorithm. Thereby, they create an externality on future decision making of an intelligent system that affects future participants. We show that individuals who are aware of the consequences of their training on the payoffs of a future generation behave more prosocially, but only when they bear the risk of being harmed themselves by future algorithmic choices. In that case, the externality of artificial intelligence training increases the share of egalitarian decisions in the present.

Technical Details

RePEc Handle
repec:eee:jeborg:v:203:y:2022:i:c:p:284-317
Journal Field
Theory
Author Count
3
Added to Database
2026-01-29