There is a wealth of evidence suggesting that supposedly colour-blind algorithms in reality reflect some less-than-savoury human biases. In practical terms, this can result in outcomes that discriminate against ethnic minorities. However, this needn’t be the case.
We all know that A.I. is far from perfect. We see the imperfections every time Amazon recommends a book that isn’t quite right or Netflix suggests a romantic comedy to a hardcore fan of horror and action thrillers. And we probably even take comfort from the fact that a mere algorithm can’t predict our life choices with absolute accuracy.
But there’s less comfort to be had when a combination of A.I. and machine learning results in a loan or job application being rejected. And frustration can turn into justifiable anger when the patterns of rejection seem to confirm a quantifiable racial bias.
In theory, A.I. should remove every scintilla of discrimination from decision-making processes. In the pre-digital world, when decisions were made largely by individuals or committees, unconscious bias or very conscious racism undoubtedly led to negative outcomes for members of ethnic minority groups when, say, applying for jobs, asking for credit or attempting to secure leasehold accommodation. A.I., on the other hand, should be colour-blind. It sifts information, applies analytics and ultimately makes a rational and accountable decision based on facts and facts alone. Equally important, as the facts change, machine learning tools adjust accordingly, ensuring that decision-making continues to reflect societal expectations and changing circumstances.
That’s the theory. Sadly, there is evidence to suggest that the decisions made by machines often reflect the very human biases that A.I. is designed to eradicate.
Reflecting Human Bias
To take an example, back in 2019, there was widespread criticism when it emerged that an algorithm used by many US hospitals was much less likely to refer black patients to enhanced treatment programmes than comparably ill white patients. Similarly, the increased reliance on A.I. in the consumer credit market plausibly plays a role in white applicants being more likely to be accepted for credit than black applicants.
A.I. as an enabler of facial recognition is creating a new set of potential problems. Facial recognition is a key technology, used not only by law enforcement agencies but also by employers as a biometric check to identify employees. Some algorithms are very poor at reading black faces, and there have been examples of individuals losing gig economy jobs because they couldn’t be identified by facial recognition systems. In law enforcement, the risk is one of mistaken identification.
Countering The Bias
So that’s the case for the prosecution: A.I. can be racist. But perhaps the more important questions are why this happens and what can be done to stop it.
One possible cause is an unconscious or conscious bias on the part of system designers who simply program in their own prejudices.
In reality, however, the cause of algorithm-driven discrimination is likely to be more complicated and subtle. Often the sample is to blame. Or to put it another way, the system is learning from incomplete or inaccurate information.
How can this happen? Well, to take an example, the aforementioned discrimination in the US credit market is caused, at least in part, by a lack of information on underserved communities. While the banks had lots of data on white, middle-class customers, they had much less to go on in the case of ethnic minorities. This data deficit resulted in a bias against those groups.
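To make the mechanism concrete, here is a minimal synthetic sketch; the data, features and group differences are all illustrative assumptions, not a reproduction of any real credit system. When one group is heavily under-represented in the training data and its outcomes follow a slightly different pattern, a single pooled model ends up noticeably less accurate for that group, even though no prejudice is programmed in.

```python
# Minimal synthetic sketch of a data deficit (all figures are illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Generate hypothetical applicants whose repayment outcome depends
    on two features via the given weights."""
    X = rng.normal(size=(n, 2))
    y = (X @ weights + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A is well represented; Group B suffers a data deficit and its
# outcomes depend on the features in a slightly different way.
Xa, ya = make_group(5000, np.array([2.0, 0.5]))
Xb, yb = make_group(100,  np.array([0.5, 2.0]))

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Fresh samples from each group reveal the gap in predictive accuracy.
for name, weights in [("A", np.array([2.0, 0.5])), ("B", np.array([0.5, 2.0]))]:
    X_test, y_test = make_group(2000, weights)
    print(f"Group {name} accuracy: {model.score(X_test, y_test):.2f}")
```

Run it and the pooled model scores markedly lower for the smaller group, simply because its patterns carry so little weight in the training data.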
Many of us, regardless of race or gender, will have experienced a variation on this theme when first applying for credit. Without a previous history, getting the first loan can be difficult. However, the impact of this phenomenon is much more pronounced for ethnic minorities of all ages.
Sample data may be flawed rather than incomplete. For instance, an algorithm that assesses insurance risk by postcode may conclude that postcodes with large ethnic minority populations represent a greater risk, one contributing factor being a perceived need for a heavier police presence. All well and good, but what if the areas in question were traditionally more heavily policed because of misconceived assumptions made by senior officers based on their own biases rather than empirical evidence?
In theory, such biases should be eradicated by machine learning, with the algorithm updated as new evidence comes in. In practice, the impact of the training data can linger for many years.
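A toy simulation, using entirely made-up numbers, shows how such a bias can persist through retraining: if patrols follow recorded incidents, and recorded incidents partly reflect where patrols already are, each retraining round simply confirms the original allocation.

```python
# Toy feedback loop (all figures are illustrative assumptions).
# Both areas have the same underlying incident rate, but area_x starts
# with a larger share of patrols because of historical decisions.
true_rate = {"area_x": 0.05, "area_y": 0.05}
patrol_share = {"area_x": 0.7, "area_y": 0.3}

for round_number in range(5):
    # Recorded incidents scale with patrol presence, not just true risk.
    recorded = {area: true_rate[area] * patrol_share[area] for area in true_rate}
    total = sum(recorded.values())
    # "Retraining": the next allocation follows the latest records.
    patrol_share = {area: recorded[area] / total for area in recorded}
    print(round_number, {area: round(share, 2) for area, share in patrol_share.items()})

# The 70/30 split never corrects itself, even though the true risk is identical.
```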
Better by Design
But this doesn’t have to be the case.
A.I. and machine learning systems can be designed with inclusivity in mind. A good first step is to ensure that the sample data is balanced and fair: it should capture the best available information about all social groups and be drawn from a representative geographical spread.
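One simple way to approach that first step is sketched below; the file name, column name and population benchmarks are assumptions for illustration. The idea is to compare the make-up of the training sample against a reference population and flag any group that is badly under-represented.

```python
# A sketch of a representation check (file, column and benchmarks assumed).
import pandas as pd

training_data = pd.read_csv("applicants.csv")    # hypothetical training sample
population_share = pd.Series({"group_a": 0.60,   # census-style benchmarks
                              "group_b": 0.25,
                              "group_c": 0.15})

sample_share = training_data["ethnic_group"].value_counts(normalize=True)

report = pd.DataFrame({"sample": sample_share, "population": population_share})
report["gap"] = report["sample"] - report["population"]

# Groups with a large negative gap are under-represented in the training data.
print(report.sort_values("gap"))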
Testing is also vital. Before a system goes live, it should be tested against a representative cross-section of the population and the outcomes analysed. If a bias emerges, the algorithm should be recalibrated.
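In its simplest form, that outcome analysis might look something like the sketch below: compare approval rates across groups on a hold-out test set and flag a disparate-impact ratio below the commonly cited four-fifths threshold. The file and column names are assumptions for illustration.

```python
# A minimal outcome test before go-live (file and column names assumed).
import pandas as pd

results = pd.read_csv("test_decisions.csv")   # hypothetical hold-out decisions
approval_rates = results.groupby("ethnic_group")["approved"].mean()

ratio = approval_rates.min() / approval_rates.max()
print(approval_rates)
print(f"Disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:
    print("Potential bias detected: recalibrate before going live.")
```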
There are real dangers that A.I. will simply embed racism, gender inequality and other forms of unfairness. But it doesn’t have to be that way. Properly designed, A.I. should enable much more equitable decisions.
More insights are available within the Pathfinder community. Join us here.