Optimal probabilistic forecasts: When do they work?

B-Tier
Journal: International Journal of Forecasting
Year: 2022
Volume: 38
Issue: 1
Pages: 384-406

Authors (5)

Martin, Gael M. (not in RePEc)
Loaiza-Maya, Rubén (not in RePEc)
Maneesoonthorn, Worapree (not in RePEc)
Frazier, David T. (not in RePEc)
Ramírez-Hassan, Andrés (not in RePEc)

Score contribution per author:

0.402 = (α = 2.01) / 5 authors × 1.0 (B-tier multiplier)

α is calibrated so that the average coauthorship-adjusted count equals the average raw count.
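The per-author score arithmetic above can be sketched as a small function. This is an illustrative assumption about how the database computes contributions, not code from the source; the function name and the tier table are hypothetical, and only the B-tier 1.0x multiplier is given in the record.

```python
# Hypothetical sketch of the per-author score contribution:
# score = (alpha / number of authors) * tier multiplier.
# Only the B-tier 1.0x multiplier appears in the record; other tiers are unknown.
TIER_MULTIPLIER = {"B": 1.0}

def author_score(alpha: float, n_authors: int, tier: str) -> float:
    """Coauthorship-adjusted, tier-weighted score per author."""
    return (alpha / n_authors) * TIER_MULTIPLIER[tier]

# Values from this record: alpha = 2.01, 5 authors, B-tier journal
score = author_score(2.01, 5, "B")
print(round(score, 3))  # 0.402
```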

Abstract

Proper scoring rules are used to assess the out-of-sample accuracy of probabilistic forecasts, with different scoring rules rewarding distinct aspects of forecast performance. Herein, we re-investigate the practice of using proper scoring rules to produce probabilistic forecasts that are ‘optimal’ according to a given score and assess when their out-of-sample accuracy is superior to alternative forecasts, according to that score. Particular attention is paid to relative predictive performance under misspecification of the predictive model. Using numerical illustrations, we document several novel findings within this paradigm that highlight the important interplay between the true data generating process, the assumed predictive model and the scoring rule. Notably, we show that only when a predictive model is sufficiently compatible with the true process to allow a particular score criterion to reward what it is designed to reward, will this approach to forecasting reap benefits. Subject to this compatibility, however, the superiority of the optimal forecast will be greater, the greater is the degree of misspecification. We explore these issues under a range of different scenarios and using both artificially simulated and empirical data.

Technical Details

RePEc Handle
repec:eee:intfor:v:38:y:2022:i:1:p:384-406
Journal Field
Econometrics
Author Count
5
Added to Database
2026-01-25