The Importance of Explainable AI
The workings behind AI-driven decisions and predictions tend to be shrouded in mystery – even to the systems’ creators – but as the technology becomes ubiquitous, the requirement for explainability is growing
AI is already changing the way we live and work. Apply for a mortgage or an insurance policy and the chances are that AI will play a part in saying “yes” or “no” and in determining the price you pay for the product. Walk through a shopping centre and you may well encounter facial recognition systems that rely on artificial intelligence to identify potential thieves, troublemakers or terrorists. Take part in a large medical trial and AI-driven pattern recognition will increasingly be deployed to accelerate progress towards a definitive result.
The common factor is trust. We expect AI and Machine Learning solutions to produce accurate and fair decisions and predictions. We don’t want to have a mortgage application turned down for no explicable reason, nor do we want to be stopped and questioned in the street because of a spurious match on a facial recognition system. And when a pharmaceutical company announces that a trial of a new wonder drug has been promising, we want to be sure the associated analytics, models and predictions can be trusted.
So from a corporate perspective, it is vital that any decisions driven by AI tools are fair, free from bias and transparent. Without this kind of assurance, it will become increasingly difficult to maintain the trust of customers and employees.
That’s why we’re going to be hearing a lot more about the concept of Explainable AI.
Taking Away The Mystery
The decisions and predictions made by AI and Machine Learning-driven tools are shrouded in mystery, even to those who have created the technology. At one end of the chain, you have the information used by the system and at the other, a set of outputs. In between there is a metaphorical black – and very opaque – box in which the data is processed. In many cases it is impossible to work backwards from the output and understand why certain decisions or predictions have been made. So, for the most part, there is little or no oversight of the processes taking place within that box.
Explainable AI – otherwise known as XAI – aims to create a methodology to make the automated decision-making process much more transparent.
And this is important. As AI becomes increasingly ubiquitous across functions ranging from supply chain management to hyper-personalised marketing and automated customer service, organisations must not only understand the technology they are using but also have the facility to control and adjust the underlying algorithms. Unless the processes are transparent, businesses may struggle to offer the best possible levels of service and outcomes for customers. In addition, organisations that can’t explain or justify decisions made by their AI tools may be exposed to reputational or regulatory risks.
The How and Why
So what does transparency actually mean? At its simplest, organisations need to know why their AI tools make the decisions they do, what criteria are applied and whether those decisions are good ones compared with the alternatives. Equally important, businesses should also be able to monitor and correct any errors made by the system.
It’s early days, but explainable AI methodologies are being developed and honed. Broadly speaking, there are three techniques in play: prediction accuracy, traceability and decision understanding.
Prediction accuracy involves running a series of simulations and comparing the decisions made by the AI tool against known outcomes in the organisation’s training data. The comparison allows the organisation to assess how accurate its decisions and predictions really are, and to make improvements where needed.
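As a purely illustrative sketch of what such a comparison could look like in code – using scikit-learn, a synthetic dataset and a simple surrogate model, none of which is prescribed by any particular XAI methodology – the idea is to measure both the accuracy of the black-box model on held-out data and how faithfully an interpretable stand-in can reproduce its decisions:

```python
# Illustrative sketch only: synthetic data and model choices are assumptions,
# not a reference implementation of any specific XAI framework.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for the organisation's training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" whose decisions we want to assess.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 1. Accuracy of the black box against held-out outcomes.
print("Black-box accuracy:", accuracy_score(y_test, black_box.predict(X_test)))

# 2. Fidelity: how closely a simple, human-readable surrogate reproduces
#    the black box's decisions on the same held-out cases.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print("Surrogate fidelity to black box:", fidelity)
```

If the surrogate tracks the black box closely, its decision rules give a rough but readable account of how predictions are being made; if it doesn’t, that in itself is a signal that the system’s behaviour deserves closer scrutiny.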
Traceability methodologies provide a different set of insights, essentially by allowing organisations to monitor the AI and ML processes. This enables engineers to understand why certain decisions have been made and make changes to algorithms if a problem is identified.
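A minimal sketch of one such monitoring step, again assuming a scikit-learn workflow with synthetic data purely for illustration, is permutation importance: shuffle each input in turn and watch how far the model’s score falls, which shows how heavily the model leans on that input.

```python
# Illustrative sketch only: the dataset and model are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

A readout like this lets engineers see at a glance which inputs are driving decisions, and flags up cases where the model is leaning on a feature it arguably shouldn’t be.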
This feeds through to the “decision understanding” component. Essentially, this is about educating the people who work with the AI system day-to-day, so that they understand why decisions are being made the way they are.
Currently, explainable AI has its limitations. For instance, it may not be possible to explain a decision about an event that has no precedent, and therefore no useful data to draw on; any explanation or forecast produced in those circumstances is likely to be unreliable.
There are also concerns about costs. As AI and machine learning evolve, the systems are becoming more complex. This in turn pushes up the computational costs required to introduce explainable AI. In practical terms, if a decisioning system is based on a relatively simple tree-based model, the costs of XAI could be low. But once organisations begin to use neural networks, the greater complexity will push up budgets.
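A rough illustration of that cost gap, assuming scikit-learn and synthetic data rather than any production system: a small tree model’s reasoning can be read straight off the fitted model, whereas a neural network offers no such readout and has to be re-scored many times to obtain a comparable view.

```python
# Rough illustration, not a benchmark: data and models are assumptions.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# Tree model: the decision rules are the explanation, at no extra cost.
tree = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X_train, y_train)
print(export_text(tree))

# Neural network: no built-in readout, so we pay for repeated re-scoring
# (roughly n_features * n_repeats extra passes over the test set here).
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=2).fit(X_train, y_train)
result = permutation_importance(net, X_test, y_test, n_repeats=10, random_state=2)
print(result.importances_mean)
```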
But the direction of travel is clear. In sectors such as healthcare and finance, AI systems are increasingly making decisions that have a profound impact on millions of lives. It is therefore essential that any decision making is fair, transparent and free of bias.
Explainable AI will help companies to optimise performance and decision-making, retain control over safety, maintain trust and comply with regulations. As such, XAI is key to the ongoing artificial intelligence revolution.