We conduct two separate experiments to study the social acceptance of AI ethical decision-making. In the first experiment, we test whether there is an “unfounded” fear of technology. We contrast two methods of measuring this fear: an indirect method that elicits preferences implicitly and a direct method that elicits preferences explicitly. Direct questions show that humans have an aversion to AI; indirect questions, however, show that humans are not averse to the implementation of new technologies. We propose a theory to explain this discrepancy: in direct questions, subjects place substantial weight on social preferences in addition to their own. In the second experiment, we study how humans react to different ways of introducing this new technology to society and find that part of the fear of AI may be related to trust in one’s government. Our results show that although individuals do not hold a bias against AI, its explicit discussion may generate antagonism.