Assessing the forecast performance of models of choice

B-Tier
Journal: Journal of Behavioral and Experimental Economics
Year: 2018
Volume: 73
Issue: C
Pages: 86-92

Score contribution per author:

2.011 = (α = 2.01 / 1 author) × 1.0 (B-tier weight)

α: calibrated so that the average coauthorship-adjusted count equals the average raw count
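
For illustration, here is a minimal Python sketch of this scoring rule. The paper list and the non-B tier weights are hypothetical assumptions; only the per-author formula (α divided by the number of authors, times a tier weight of 1.0 for B-tier) and the calibration rule for α come from this record.

    # Sketch of the scoring rule above. Only the B-tier weight of 1.0 is
    # taken from the record; the other weights and the paper list are assumed.
    TIER_WEIGHTS = {"A": 1.5, "B": 1.0, "C": 0.5}

    def calibrate_alpha(author_counts):
        """Pick alpha so mean(alpha / n_i) equals the mean raw count of 1 per paper."""
        # mean(alpha / n_i) = 1  =>  alpha = 1 / mean(1 / n_i)
        return len(author_counts) / sum(1.0 / n for n in author_counts)

    def score_contribution(alpha, n_authors, tier):
        """Per-author score: coauthorship-adjusted count times the tier weight."""
        return (alpha / n_authors) * TIER_WEIGHTS[tier]

    # Hypothetical database with papers by 1-4 authors.
    counts = [1, 2, 2, 3, 4]
    alpha = calibrate_alpha(counts)
    print(f"alpha = {alpha:.3f}")
    print(f"solo B-tier paper contributes {score_contribution(alpha, 1, 'B'):.3f}")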

Abstract

We often want to predict human behavior. It is well known that the model that best fits in-sample data is not necessarily the model that forecasts (i.e., predicts out of sample) best, but we lack guidance on how to select a model for the purpose of forecasting. We illustrate the general issues and methods with the case of Rank-Dependent Expected Utility versus Expected Utility, using laboratory data and simulations. We find that poor forecasting performance is a likely outcome at typical laboratory sample sizes, due to over-fitting. Finally, we derive a decision-theory-based rule for selecting the best model for forecasting as a function of the sample size.
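
To make the in-sample versus out-of-sample distinction concrete, the following is a generic sketch, not the paper's actual procedure (which compares Rank-Dependent Expected Utility against Expected Utility on laboratory data). A simple and a more flexible logistic choice model are fit to simulated binary choices; the model names, sample size, and data-generating process are all assumptions for illustration. With a small sample, the flexible model fits the training data at least as well by construction, yet its extra parameter can worsen the holdout fit, which is the over-fitting problem the abstract describes.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def neg_ll(params, x, y, flexible):
        """Negative log-likelihood of a logistic choice rule P(choose) = logit^-1(a*x + b)."""
        a, b = (params[0], params[1]) if flexible else (params[0], 0.0)
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    n = 40                              # a small, lab-sized estimation sample
    x = rng.normal(size=2 * n)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-1.5 * x)))  # true process: simple model
    train, test = slice(0, n), slice(n, 2 * n)

    for flexible, name in [(False, "simple"), (True, "flexible")]:
        fit = minimize(neg_ll, np.ones(2 if flexible else 1),
                       args=(x[train], y[train], flexible))
        print(f"{name:8s} in-sample NLL = {neg_ll(fit.x, x[train], y[train], flexible):6.2f}  "
              f"holdout NLL = {neg_ll(fit.x, x[test], y[test], flexible):6.2f}")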

Technical Details

RePEc Handle
repec:eee:soceco:v:73:y:2018:i:c:p:86-92
Journal Field
Experimental
Author Count
1
Added to Database
2026-01-29