As data capturing nearly every aspect of daily life becomes more readily available, and advanced technologies such as Artificial Intelligence and Machine Learning go from strength to strength, businesses and public institutions can now automate many tasks that were previously performed by humans. The algorithms developed in these areas can help evaluate loan applications, predict a defendant's likelihood of re-offending, or help clinicians determine what type of brain cancer a patient might be suffering from.
Apart from the obvious potential cost savings and the prospect of relieving employees of some of their more mundane daily tasks, many organisations hope that using such algorithms will allow them to overcome the biases inherent in human decision making, which are well documented in areas such as credit scoring, criminal justice, and recruitment. However, there is growing concern that algorithms may themselves produce biased outcomes and recommendations, either because the data used to train them reflects historical biases, or because they detect patterns that we would consider discriminatory, for example by associating low income with higher crime rates, which could disadvantage ethnic groups with lower average earnings.
The Open Innovation Team commissioned me, along with a group of academics, to produce a landscape summary of bias in algorithmic decision-making, with the purpose of contributing to the evidence base for the Centre for Data Ethics and Innovation's review of the topic. Our work revealed a complex picture of research and public debate, with the following key findings:
- There are many different (and often mutually incompatible) interpretations of algorithmic fairness, and we must consider which one is most appropriate in each context. Society has not been able to reconcile different views on this, and we cannot expect machines, no matter how "smart" they are, to do it for us. A brief illustration of this incompatibility follows this list.
- Current legislation covers some manifestations of algorithmic bias, but is lacking with regard to others. Even where legal frameworks are sufficient, the law can only tell us what is "legal", not necessarily what should be considered "fair". Furthermore, data protection and privacy law may hinder attempts to mitigate bias effectively in some cases.
- The consequences of algorithmic bias vary considerably across application contexts. Sometimes the difference is obvious, for example between sending somebody to prison and declining an increase in their credit card limit, but discrimination can also be far more subtle, for example when an algorithm's chance of rating a CV highly drops by only a small percentage, yet that drop applies to millions of female applicants.
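To make the first finding concrete, the following minimal sketch (not taken from the report; all data and names are hypothetical) computes two widely used statistical fairness criteria, demographic parity and equal opportunity, on a toy set of approval decisions. When two groups differ in how many of their applicants are genuinely qualified, satisfying one criterion generally means violating the other.

```python
# Hypothetical illustration of two fairness criteria in tension.
# Each pair is (approved by the model?, actually qualified?) for one applicant.

def selection_rate(records):
    """Fraction of applicants the model approves."""
    return sum(pred for pred, _ in records) / len(records)

def true_positive_rate(records):
    """Fraction of genuinely qualified applicants the model approves."""
    qualified = [pred for pred, actual in records if actual]
    return sum(qualified) / len(qualified)

group_a = [(1, 1), (1, 1), (0, 1), (0, 0)]   # 3 of 4 applicants qualified
group_b = [(1, 1), (1, 0), (0, 0), (0, 0)]   # 1 of 4 applicants qualified

print("selection rates:", selection_rate(group_a), selection_rate(group_b))
# -> 0.5 and 0.5: "demographic parity" (equal approval rates) holds.

print("true positive rates:", true_positive_rate(group_a), true_positive_rate(group_b))
# -> about 0.67 vs 1.0: "equal opportunity" (equal approval rates among the
#    qualified) is violated, even though approval rates are identical.
```

The point is not the numbers themselves but the structural tension: when base rates differ between groups, these criteria cannot all be satisfied at once, so which one should take precedence is a value judgement that depends on the application context rather than something an algorithm can settle on its own.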
A recurring theme in our analysis of the literature is that concrete empirical evidence is currently insufficient regarding the use of algorithms in the target sectors, the precise nature of these algorithms, and the prevalence of algorithmic bias. We also observe a number of efforts by various organisations to establish frameworks and standards for mitigating algorithmic bias, but a lack of clarity about how these relate to one another and how they might be deployed and enforced in the future. Addressing these questions will require deeper collaboration between scientists, business leaders, policy makers and the public, and the complexities of the problem will need to be communicated to the general public in a clear and nuanced manner.
Michael Rovatsos is Professor of Artificial Intelligence at the University of Edinburgh, where he also heads the Bayes Centre for Data Science and AI.