Portfolio optimization often struggles in realistic out-of-sample contexts. We deconstruct this stylized fact by comparing historical forecasts of portfolio optimization inputs with their subsequent out-of-sample values. We confirm that historical forecasts are imprecise guides to subsequent values, but we find that the resulting forecast errors are not entirely random. They have predictable patterns and can be partially reduced using their own history. Learning from past forecast errors to calibrate inputs (akin to empirical Bayesian learning) generates portfolio performance that reinforces the case for optimization. Furthermore, the portfolios achieve performance that meets expectations, a desirable yet elusive feature of optimization methods.
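The sketch below illustrates the general idea of learning from past forecast errors, not the paper's exact procedure: it forms historical-mean forecasts of expected returns over a rolling window, records how those forecasts missed the realized values, and shifts the latest forecast by the average past error before feeding it to a simple mean-variance optimizer. The window length, the additive bias correction, and the unconstrained weights are all illustrative assumptions.

```python
# Minimal sketch: calibrate optimization inputs using the history of their
# own forecast errors. All parameter choices here are assumptions made for
# illustration, not the authors' actual methodology.
import numpy as np

def rolling_forecast_errors(returns, window):
    """For each date t, compare the historical-mean forecast (trailing
    `window` periods) against the realized next-period return."""
    errors = []
    for t in range(window, len(returns) - 1):
        forecast = returns[t - window:t].mean(axis=0)   # historical estimate
        realized = returns[t + 1]                       # out-of-sample value
        errors.append(realized - forecast)
    return np.array(errors)

def calibrated_forecast(returns, window, errors):
    """Adjust the latest historical forecast by the mean of its past errors,
    a crude stand-in for an empirical-Bayes-style correction."""
    raw = returns[-window:].mean(axis=0)
    return raw + errors.mean(axis=0)

def mean_variance_weights(mu, cov, risk_aversion=5.0):
    """Unconstrained mean-variance weights, rescaled to sum to one."""
    w = np.linalg.solve(risk_aversion * cov, mu)
    return w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rets = rng.normal(0.005, 0.04, size=(240, 4))       # simulated monthly returns
    errs = rolling_forecast_errors(rets, window=60)
    mu_hat = calibrated_forecast(rets, window=60, errors=errs)
    cov_hat = np.cov(rets[-60:], rowvar=False)
    print(mean_variance_weights(mu_hat, cov_hat))
```

In this toy version the correction is a simple average of past misses; the point is only that forecast errors carry usable structure, so inputs can be adjusted with their own history rather than taken at face value.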