We develop new quasi-experimental tools to understand algorithmic discrimination and to build nondiscriminatory algorithms when the outcome of interest is only selectively observed. We first show that algorithmic discrimination arises when the available algorithmic inputs differ systematically across individuals with the same objective potential outcomes. We then show how algorithmic discrimination can be eliminated by measuring and purging these conditional disparities. Leveraging the quasi-random assignment of bail judges in New York City, we find that our new algorithms not only eliminate algorithmic discrimination but also generate more accurate predictions by correcting for the selective observability of misconduct outcomes.