Score contribution per author:
α: calibrated so that the average coauthorship-adjusted count equals the average raw count.
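A minimal sketch of this calibration, under an assumed functional form (the source states only the calibration target, not the adjustment rule): suppose each paper contributes 1/n to each of its n authors, scaled by a common factor α, so an author's adjusted count is α · Σ 1/n over their papers. The stated condition then pins down α in closed form. All data below are hypothetical.

```python
# Hypothetical data: for each author, the author counts of their papers.
papers_by_author = [
    [1, 2, 4],     # author with 3 papers
    [3, 3],        # author with 2 papers
    [1, 1, 2, 5],  # author with 4 papers
]

raw = [len(p) for p in papers_by_author]                  # raw paper counts
frac = [sum(1 / n for n in p) for p in papers_by_author]  # fractional counts

# Calibration: mean(alpha * frac) = mean(raw)  =>  alpha = mean(raw) / mean(frac)
alpha = (sum(raw) / len(raw)) / (sum(frac) / len(frac))

adjusted = [alpha * f for f in frac]
print(f"alpha = {alpha:.3f}")
print("adjusted counts:", [round(a, 2) for a in adjusted])
```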
We model intertemporal ambiguity as the scenario in which a Bayesian learner holds more than one prior distribution over a set of models, and we provide sufficient conditions for ambiguity to fade away as a consequence of learning. Our conditions apply to most learning environments: i.i.d. and non-i.i.d. model classes, and both well-specified and misspecified model-class/prior-support pairs. We show that ambiguity fades away whenever the empirical evidence supports a set of models with identical predictions, a condition much weaker than learning the truth.
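A minimal numerical sketch of this mechanism (the model class, prior set, and data below are hypothetical, not from the source): a learner holds three priors over four Bernoulli models, two of which share θ = 0.6 and hence make identical predictions while remaining distinct models. Updating each prior on data generated with θ = 0.6 concentrates every posterior on those two models in different proportions, so no single model is ever learned, yet the posterior predictives agree and the ambiguity (the spread of predictives across the prior set) vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model class: four models; only theta affects observables. Models at
# indices 1 and 2 both have theta = 0.6 and so predict identically,
# while differing in an unidentified label.
thetas = np.array([0.2, 0.6, 0.6, 0.2])

# A set of three priors over the four models (one prior per row).
priors = np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.10, 0.40, 0.10],
    [0.10, 0.40, 0.10, 0.40],
])

data = rng.random(200) < 0.6  # i.i.d. Bernoulli(0.6) observations

posts = priors.copy()
for t, x in enumerate(data, 1):
    lik = thetas if x else 1 - thetas          # likelihood of observation x
    posts = posts * lik                        # Bayes update for every prior
    posts /= posts.sum(axis=1, keepdims=True)
    preds = posts @ thetas                     # predictive P(next = 1) per prior
    if t % 50 == 0:
        print(f"t={t:3d}  ambiguity = {preds.max() - preds.min():.4f}")
```

The printed ambiguity shrinks toward zero even though the posteriors never resolve which of the two θ = 0.6 models is true, illustrating how identical predictions suffice without learning the truth.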