Treating algorithms fairly
… evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake.
As someone who makes a living designing and implementing algorithms and presenting their results to people, I find this worrying (as should any programmer!).
How can we overcome this irrational bias against algorithms? The authors stop short of an answer:
Finally, our research has little to say about how best to reduce algorithm aversion among those who have seen the algorithm err. This is the next (and great) challenge for future research.
Note that simply demonstrating an algorithm's superiority is not a solution: people discount that evidence and still prefer the inferior results of human judgment.
Some ideas off the top of my head:
- “Humanize” algorithms: highlight the people and teams that build them, and their dedication and hard work. Just as people associate “taxi” with the human drivers rather than the mechanical cars they drive.
- Put humans between the algorithms and those consuming their output: have a core group of people who do not suffer from algorithm aversion vet and relay the algorithms’ results. This may not work for real-time lookups, though.
- Reframe an algorithm’s fallibility as human error on the part of its designer or implementer, so that its mistakes read as familiar human fallibility rather than machine failure.
Any other ideas?