The paper proposes a randomness-type test for comparing the validity of different measures of economic uncertainty. The test verifies the randomness hypothesis for the match between the jumps of an uncertainty index and the dates of uncertainty-generating events identified either by a panel of experts or by large language models (LLMs) capable of generating human-like text. The test can also be used to verify whether LLMs provide a reliable selection of uncertainty-generating events. It was first used to evaluate the quality of three uncertainty indices for Poland and then applied to six uncertainty indices for the US, using monthly data from January 2004 to March 2021 for both countries. The results show that LLMs offer a reasonable alternative for testing when panels of experts are not available.
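The abstract does not specify the exact test statistic, but the core idea — checking whether index jumps coincide with named event dates more often than chance would allow — can be illustrated with a minimal sketch. The sketch below assumes a hypergeometric match-count test under the randomness null; the function name `randomness_match_test` and the `jump_quantile` jump definition are hypothetical illustrations, not the paper's actual specification.

```python
import numpy as np
from scipy.stats import hypergeom

def randomness_match_test(index, event_months, jump_quantile=0.9):
    """Test whether jumps of an uncertainty index coincide with
    expert- or LLM-named event months more often than chance.

    index        : 1-D array of monthly index values
    event_months : 0-based indices of months flagged as
                   uncertainty-generating events
    jump_quantile: a "jump" is a month whose absolute first
                   difference exceeds this quantile (assumption)
    """
    diffs = np.abs(np.diff(index))              # month-to-month changes
    threshold = np.quantile(diffs, jump_quantile)
    # month t+1 is a jump if the change from month t to t+1 is large
    jump_months = set(np.where(diffs > threshold)[0] + 1)

    total = len(index)                          # population: all months
    n_jumps = len(jump_months)                  # "successes" in population
    n_events = len(set(event_months))           # number of draws
    matches = len(jump_months & set(event_months))

    # Under the randomness null, the match count is hypergeometric;
    # p-value = P(matches >= observed) under random event dates
    p_value = hypergeom.sf(matches - 1, total, n_jumps, n_events)
    return matches, p_value

# Example with synthetic data (Jan 2004 - Mar 2021 = 207 months)
rng = np.random.default_rng(0)
idx = np.cumsum(rng.normal(size=207))
events = rng.choice(207, size=15, replace=False)
matches, p = randomness_match_test(idx, events)
print(f"matches={matches}, p-value={p:.3f}")
```

A small p-value rejects the hypothesis that the event dates and the index jumps align purely by chance, which is the direction of evidence in favor of the index (or, symmetrically, of the LLM-generated event list) under this reading of the test.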