Score contribution per author:
α: calibrated so that the average coauthorship-adjusted count equals the average raw count
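As an illustrative aside, the calibration can be sketched in code under one possible reading of the rule: each author of an n-author item receives 1/n fractional credit, scaled by a constant α chosen so that the average adjusted count equals the average raw count. The fractional rule, the toy data, and all variable names below are assumptions for illustration, not the paper's actual procedure.

```python
# Hypothetical sketch: calibrate alpha so the mean coauthorship-adjusted count
# equals the mean raw count. Assumes a simple fractional rule (alpha / n credit
# per author of an n-author item); the actual rule may differ.

papers = [            # toy author lists, for illustration only
    ("a", "b", "c"),
    ("a", "b"),
    ("c",),
    ("a", "d", "e", "f"),
]

authors = sorted({name for team in papers for name in team})

# Raw count: number of papers each author appears on.
raw = {a: sum(a in team for team in papers) for a in authors}

# Unscaled fractional count: 1/n credit per author on an n-author paper.
frac = {a: sum(1.0 / len(team) for team in papers if a in team) for a in authors}

# Calibrate alpha so the average adjusted count equals the average raw count.
alpha = (sum(raw.values()) / len(authors)) / (sum(frac.values()) / len(authors))

adjusted = {a: alpha * frac[a] for a in authors}

print(f"alpha = {alpha:.3f}")
for a in authors:
    print(f"{a}: raw={raw[a]}, adjusted={adjusted[a]:.2f}")
```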
A multi-stage persuasion-forecasting tournament asked specialists and generalists (“superforecasters”) to explain their probability judgments of short- and long-run existential threats to humanity. Specialists were more pessimistic, especially about long-run threats posed by artificial intelligence (AI). Despite incentives to share their best arguments during four months of discussion, neither side materially moved the other’s views. This would be puzzling if participants were Bayesian agents methodically sifting through elusive clues about distant futures, but it is less puzzling if participants were boundedly rational agents searching for confirmatory evidence as the risks of embarrassing accuracy feedback receded. Consistent with the latter mechanism, strong AI-risk proponents made particularly extreme long- but not short-range forecasts and overestimated the long-range AI-risk forecasts of others. We stress the potential of these methods to inform high-stakes debates, but we acknowledge limits on what even skilled forecasters can achieve in anticipating rare or unprecedented events.
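To make the contrast between the two mechanisms concrete, a toy simulation (purely illustrative, not drawn from the tournament data; all parameters below are assumptions) shows that agents who update on shared evidence in a Bayesian way converge, whereas agents who down-weight evidence that cuts against their prior remain polarized.

```python
# Toy illustration (not from the study): two agents with different priors see the
# same evidence. Full-weight Bayesian updaters converge; agents who down-weight
# inconvenient signals ("confirmatory" updaters) stay polarized.
import random

random.seed(0)
P_TRUE = 0.2        # assumed chance each signal points toward "high risk"
N_SIGNALS = 200
LIKELIHOOD = 0.7    # P(high-risk signal | risk is high); 1 - LIKELIHOOD otherwise

def bayes_update(p, signal, weight=1.0):
    """Update P(high risk) on one binary signal; weight < 1 blends toward no update."""
    like_high = LIKELIHOOD if signal else 1 - LIKELIHOOD
    like_low = 1 - like_high
    post = (p * like_high) / (p * like_high + (1 - p) * like_low)
    return weight * post + (1 - weight) * p

signals = [random.random() < P_TRUE for _ in range(N_SIGNALS)]

# Bayesian agents: same evidence, full-weight updates -> beliefs converge.
optimist, pessimist = 0.05, 0.60
for s in signals:
    optimist = bayes_update(optimist, s)
    pessimist = bayes_update(pessimist, s)
print(f"Bayesian agents: {optimist:.3f} vs {pessimist:.3f}")

# Confirmation-seeking agents: each discounts the signals that contradict their prior.
optimist_c, pessimist_c = 0.05, 0.60
for s in signals:
    w_opt = 0.1 if s else 1.0   # optimist discounts high-risk signals
    w_pes = 1.0 if s else 0.1   # pessimist discounts low-risk signals
    optimist_c = bayes_update(optimist_c, s, w_opt)
    pessimist_c = bayes_update(pessimist_c, s, w_pes)
print(f"Confirmation-seeking agents: {optimist_c:.3f} vs {pessimist_c:.3f}")
```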