A longstanding finding in the forecasting literature is that averaging the forecasts from a range of models often improves upon forecasts based on a single model, with equal-weight averaging working particularly well. This paper analyzes the effects of trimming the set of models prior to averaging. We compare different trimming schemes and propose a new approach based on Model Confidence Sets that takes into account the statistical significance of differences in out-of-sample forecasting performance. In an empirical application to the forecasting of U.S. macroeconomic indicators, we find significant gains in out-of-sample forecast accuracy from using the proposed trimming method.
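
To make the trim-then-average idea concrete, the sketch below illustrates it under simplified assumptions; it is not the paper's procedure. In place of a full Model Confidence Set, it trims any model whose out-of-sample loss is significantly worse than the best model's according to a simple one-sided Diebold-Mariano-style t-test (a full MCS would instead iteratively eliminate models using bootstrap critical values, and the t-test here ignores autocorrelation in the loss differentials). The function name `trim_then_average` and all parameter choices are hypothetical.

```python
import numpy as np
from scipy import stats

def trim_then_average(forecasts, actual, alpha=0.10):
    """Trim significantly inferior models, then equal-weight average the rest.

    forecasts : (T, M) array of out-of-sample forecasts from M models
    actual    : (T,) array of realized values
    alpha     : significance level for the (simplified) trimming test
    """
    losses = (forecasts - actual[:, None]) ** 2        # (T, M) squared-error losses
    best = np.argmin(losses.mean(axis=0))              # model with the lowest average loss
    keep = []
    for m in range(forecasts.shape[1]):
        if m == best:
            keep.append(m)
            continue
        # t-test on the loss differential vs. the best model (assumes i.i.d.
        # differentials; a proper test would use HAC standard errors)
        d = losses[:, m] - losses[:, best]
        t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
        p_val = 1 - stats.t.cdf(t_stat, df=len(d) - 1)  # one-sided: is m worse?
        if p_val > alpha:                               # not significantly worse: keep
            keep.append(m)
    return forecasts[:, keep].mean(axis=1)              # equal-weight average of survivors

# Toy usage: three models forecasting a noisy series; the third is badly biased
# and should be trimmed before averaging.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
f = np.column_stack([y + rng.normal(0, 1.0, size=200),
                     y + rng.normal(0, 1.2, size=200),
                     y + 3.0 + rng.normal(0, 1.0, size=200)])
combined = trim_then_average(f, y)
```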