The Shapley–Folkman theorem places a scalar upper bound on the distance between the Minkowski sum of non-convex sets and its convex hull. We observe that information is lost when a vector is reduced to a scalar to produce this bound, and we propose a simple normalization of the underlying space that mitigates this loss. As an example, we apply this result to the Anderson (1978) core convergence theorem and show how our normalization yields an intuitive, unitless upper bound on the discrepancy between an arbitrary core allocation and the corresponding competitive equilibrium allocation.
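For reference, a standard form of the scalar bound in question is the Shapley–Folkman–Starr inequality; the statement and notation used in the paper may differ, so the following is only an illustrative sketch, with $\operatorname{rad}(S)$ denoting the radius of the smallest ball containing $S$. For nonempty compact sets $S_1,\dots,S_n \subset \mathbb{R}^d$ and any point $x \in \operatorname{conv}\!\bigl(\sum_{i=1}^n S_i\bigr)$, there exists $y \in \sum_{i=1}^n S_i$ such that
\[
  \lVert x - y \rVert \;\le\; \sqrt{\min(n,d)}\;\max_{1 \le i \le n} \operatorname{rad}(S_i).
\]
Note that the right-hand side carries the units of the underlying space, which is the feature a normalization producing a unitless bound is meant to address.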