
AI the World Can Trust
By James Duez

James Duez is the CEO and co-founder of Rainbird.AI, a decision intelligence business focused on the automation of complex human decision making. James has over 30 years’ experience building and investing in technology companies. He’s worked with Global 250 organisations and state departments, and is one of Grant Thornton’s ‘Faces of a Vibrant Economy’. James is also a member of the NextMed faculty and the Forbes Technology Council.
In this post, James considers the limitations of generative AI for business decision-making. Currently, decisions made by GenAI and LLMs require human intervention because they lack accountability and transparency. James explains that the solution lies in neurosymbolic AI, a novel methodology which combines the analytical power of ML with the clarity of human reasoning:

DATA SCIENCE AND DECISION INTELLIGENCE WITH NEUROSYMBOLIC AI

The evolution of artificial intelligence (AI) is a story of remarkable breakthroughs followed by challenging realisations, and it dates back to the dawn of computing. For decades, AI has gone through a series of ‘summer’ and ‘winter’ cycles, driven by hype and followed by troughs of failed deployments and disillusionment.

And here we are again, this time marvelling at generative AI and the capabilities of large language models (LLMs) like GPT-4. Data leaders are continuing to figure out how to leverage LLMs safely, to discover where the value actually is and what may come next.

Of course, AI was never ‘one thing’, but rather a broad church of capabilities, all at the forefront of innovation, trying to add value. Since the term AI was coined in 1955 by John McCarthy, we have suffered from the AI Effect: a phenomenon where, once an AI capability starts to deliver value, it often ceases to be recognised as intelligent, or as true AI. This shift in perception occurs because once-novel feats become integrated into the fabric of everyday technology, leading people to recalibrate what they consider to be AI, often in search of the next groundbreaking advancement that eludes current capabilities.

If you work in data, you’ll know that it’s your job to use AI technologies to drive value, and to do so responsibly: ensuring that AI models provide reliable outcomes and comply with potential upcoming regulations on transparency, explainability and bias.

So how can you cut through the current market hype to locate the true business value?

As with the generations of data-centric AI before it, we are becoming acutely aware of the challenges this latest chapter, generative AI, presents. The power to generate content is no longer in question, but the real test, once again, is to focus on the business outcomes we are trying to achieve.

Are we looking to generative AI to enhance the user experience of experts, or to tackle decisioning? And if the latter, can we trust the decisions it might make?

People forget their AI history.

Back in the 1980s and 90s, the structured, rule-based world of symbolic AI was all the rage. Back then, AI wasn’t even about data; it was focused on solving problems by representing knowledge in models that could be reasoned over by inference engines. We called it symbolic AI because it focused on the manipulation of symbols – discrete labels that represent ideas, concepts or objects – which, when combined with logic, could perform tasks that required intelligent behaviour. This approach to AI was based on the premise that human thought and cognition could be represented through the use of symbols and rules for manipulating them.

Symbolic AI’s golden age saw governments investing billions in AI, much as we see in the hype bubble around generative AI today. But, in time, symbolic AI gave way to sub-symbolic AI, later known as machine learning (ML), due to the latter’s ability to learn directly from data without the need for explicit programming of rules and logic. In fact, the pendulum swung so comprehensively that many forgot about symbolic AI, and it retreated into academia.

ML has revolutionised numerous fields, from autonomous vehicles to personalised medicine, but its Achilles’ heel has always been the opacity of its prediction-making. The ‘black box’ nature of ML models has long been deemed unacceptable and unreliable when it comes to making decisions of consequence, especially those that are regulated, such as in financial services and healthcare. We value ML for its ability to make predictions based on the past, but it is not in itself a decision-making technology. It cannot be left unchecked.

Today, we are living in a world that increasingly demands transparency and accountability, and a focus on outcomes and their consequences.

As I write this, I am aware there are many data scientists reading who may proclaim that the ‘black box’ problem is essentially solved and that they understand their ML models. There are many methods for gaining a statistical understanding of how ML models behave, but none of them amounts to a description of the chain of causation that leads to a decision. Those who are responsible for such business decisions need to understand why and how individual outcomes are achieved if they are to trust AI. That trust remains lacking.

As a society, we have a long history of being harmed by technology that we didn’t understand. It is no surprise then that despite the massive value we have extracted from data science and ML, regulators have increasingly felt the need to govern it due to this trust gap. We have to accept that what ML produces is a prediction, not a judgement, and therefore a degree of human intervention and oversight is required downstream from an ML output to bridge the gap between ‘insight’ and our ability to take ‘action’.


In parallel to this evolution of AI has been the rise of the automation agenda. This largely separate function is obsessed with efficiency, and has its roots in linear process automation like robotic process automation (RPA) and other rule-based workflow tools. Process automation has been focused on reducing the cost of, and reliance on, human labour, but it increasingly touches on the organisational desire to digitise products and services to meet evolving human demands for 24/7, multi-channel experiences.

The majority of the technology used by automation teams has historically been rules-based and linear, although the last decade has seen increasing attempts to leverage AI tools to achieve intelligent document processing (IDP) and other more complex tasks. But, despite these efforts, process people have also struggled with a gap, with their toolkits falling short of being able to automate the more complex, human-centric and contextual decision-making that should naturally follow the automation of simple and repetitive tasks.

It’s like everyone on ‘planet data’ is looking for ways of leveraging AI to automate more complex human decision-making, and everyone on ‘planet process’ is doing the same.


So does this new summer of generative AI and LLMs close this gap? They have certainly pushed the boundaries of text generation and natural language understanding, but are they the answer to decisioning?

Unfortunately, the excitement around LLMs remains tempered by this same critical undercurrent of concern that exists around all other ML – that outputs generated by them still lack the transparency and accountability required for us to fully trust them. LLMs are enhancing the user experience and efficiency of experts, but due to their statistical nature, we cannot delegate decision-making to them unchecked.


So what IS the answer?

Fortunately, a new field has emerged, that of decision intelligence (DI), founded by Dr Lorien Pratt. There are many labels for this evolving field. For example, Forrester refers to it as ‘AI decisioning.’ Whatever the label, it represents a paradigm shift and is an absolute superpower for anyone who adopts it.

Gartner defines decision intelligence as ‘a practical discipline used to improve decision-making by explicitly understanding and engineering how decisions are made and how outcomes are evaluated, managed and improved by feedback’. Gartner already credits it with the same transformational potential, on the same timescale, as the whole of generative AI – a $10bn market as of 2022, growing at a tremendous rate.

But what is it?

Data science has had decades of being a technology looking for problems to solve. DI starts with a focus on the decisions we are trying to make, and then works backwards from there to the tools we might use to make them. It leverages a hybrid approach, combining a number of technologies to achieve the desired business outcomes.

That sounds terribly simple, but when we take this ‘outcome first’ approach, we find that the tools not only lie in data but in knowledge. DI encourages us to combine the analytical power of ML with the clarity and transparency of human-like reasoning that is synonymous with symbolic AI.

This has led to the development of neurosymbolic AI methodologies, which have explainability and trust baked into the heart of both the technology and the methodology. One approach is to leverage and extend knowledge graphs – non-linear representations of knowledge – over which a very fast symbolic reasoning engine can reason to answer queries.
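To make that concrete, here is a minimal sketch, in Python, of forward-chaining reasoning over a tiny knowledge graph of triples. The facts, rules and the infer() helper are invented purely for illustration (this is not Rainbird’s engine); the point is that every inferred conclusion carries with it the chain of facts that produced it.

```python
# A toy forward-chaining reasoner over a knowledge graph of (subject,
# relation, object) triples. Purely illustrative: production engines handle
# uncertainty, weights and far larger graphs, but the principle is the same:
# every inferred fact keeps the evidence that produced it.

facts = {("alice", "lives_in", "norwich"), ("norwich", "located_in", "uk")}

rules = [  # if X lives in Y and Y is located in Z, then X lives in Z
    {"if": [("?x", "lives_in", "?y"), ("?y", "located_in", "?z")],
     "then": ("?x", "lives_in", "?z")},
]

def match(pattern, fact, bindings):
    """Try to unify a pattern with a fact; return extended bindings or None."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):              # variable term
            if new.setdefault(p, f) != f:
                return None
        elif p != f:                       # constant term must match exactly
            return None
    return new

def substitute(triple, bindings):
    return tuple(bindings.get(t, t) for t in triple)

def infer(facts, rules):
    """Apply rules until no new facts appear, keeping an evidence trail."""
    evidence, changed = {}, True
    while changed:
        changed = False
        for rule in rules:
            bindings_list = [{}]
            for pattern in rule["if"]:     # satisfy each premise in turn
                bindings_list = [b2 for b in bindings_list for f in facts
                                 if (b2 := match(pattern, f, b)) is not None]
            for b in bindings_list:
                new_fact = substitute(rule["then"], b)
                if new_fact not in facts:
                    facts.add(new_fact)
                    evidence[new_fact] = [substitute(p, b) for p in rule["if"]]
                    changed = True
    return facts, evidence

facts, evidence = infer(facts, rules)
print(("alice", "lives_in", "uk") in facts)   # True
print(evidence[("alice", "lives_in", "uk")])  # the two facts that justified it
```

The decision here is not a statistical guess: it is the result of applying explicit rules, and the evidence trail is the explanation.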

Unlike most AI, this neurosymbolic approach is not making a statistical prediction over historic data; it is reasoning over knowledge and data, juggling all the necessary probabilities that are synonymous with the real world, to automate complex decisions in a transparent and explainable way.

A neurosymbolic approach, working as a composite of machine learning and symbolic AI, allows us to address use cases featuring a greater degree of decision complexity. It powers solutions for some big organisations like Deloitte, EY and BDO.

Decision complexity measures the chain of events that follows a decision, along with the number of external factors that, in combination with the action taken, influence the decision’s outcomes. By way of example, a low-complexity use case might be the choice to show a particular advertisement on a platform like Google or Facebook. The low complexity derives from the platform’s goal of creating a simple behaviour: for the user to click on the ad and then, possibly, buy the product.

In contrast, a high-complexity decision might involve choices regarding a tax policy created by a government: there are complex implications of such a decision, which ripple through society and into the future. Decisions like these require a high degree of human expertise, which is not captured in data sets.

The world is attracted to LLMs because we all like the idea that they can process unstructured inputs, and provide us with natural language answers. But most have now realised their limitations.

LLMs cannot reason: they are ‘stochastic parrots’, designed to produce the statistically most likely output. We must look at LLMs as prediction machines, well suited to creating predicted drafts of content to aid experts, but not capable of the real judgement required to make decisions of consequence in high-complexity domains.

Even with the use of fine-tuning or techniques like retrieval augmented generation (RAG), there is a high risk of hallucination (generative AI’s term for outputting errors). When you combine this with an inability to provide a chain of reasoning, you can see why powering complex decisioning is not on the cards. Like all ML, generative AI techniques remain an extension of search; they are not a proxy for reasoning or decision-making.

However, it transpires that LLMs represent a powerful missing jigsaw piece in a different paradigm. They make it possible to build graph models programmatically, turning other forms of knowledge into transparent, structured knowledge graphs.

LLMs are phenomenally good at understanding language, and as such are extremely well-suited to extracting knowledge graphs from documents. Properly tuned and prompted, you can use an LLM to process regulations, policies, or a standard operating procedure and have it return a well-formed knowledge graph that represents the expertise contained within this documentation.

It’s even possible to extract weights and logic from documentation via LLMs. By doing so, you are able to turn unstructured documentation into structured, symbolic models that can be used, alongside the LLM, for reasoning. Critically, this sort of reasoning takes place in the symbolic domain, with no dependency on the LLM or any other ML model.
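To illustrate what that extraction step might look like in practice, the sketch below prompts a general-purpose LLM to return a small knowledge graph from a fragment of policy text. The model name, prompt wording and JSON schema are assumptions made for this example, not a description of any vendor’s pipeline.

```python
# Illustrative sketch: prompting an LLM to extract a knowledge graph from
# unstructured policy text. Prompt, model and output schema are assumptions.
import json
from openai import OpenAI  # any chat-completion client could be substituted

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POLICY_TEXT = """Accidental damage to the insured vehicle is covered up to £5,000,
provided the driver holds a valid licence and the incident is reported
within 30 days of occurring."""

PROMPT = f"""Extract the rules in the policy below as JSON with two keys:
"concepts" (a list of entity names) and "relationships" (a list of
[subject, relation, object] triples, including conditions and limits).

Policy:
{POLICY_TEXT}

Return only JSON."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": PROMPT}],
    response_format={"type": "json_object"},
)

graph = json.loads(response.choices[0].message.content)
# Expected shape (actual output will vary):
# {"concepts": ["accidental_damage", "insured_vehicle", ...],
#  "relationships": [["accidental_damage", "covered_up_to", "£5,000"],
#                    ["cover", "requires", "valid_licence"], ...]}
print(graph["relationships"])
```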

This guarantees explainability and transparency.

Symbolic AI has long suffered from the closed-world assumption – such systems only know what has been explicitly encoded in them – potentially limiting their adoption to discrete, well-defined, narrow domains.

Thanks to LLMs, we can also tackle this problem and start to open up this closed world. An LLM is able to consider unstructured inputs in the context of a knowledge graph. Those inputs could be a natural language query, or any form of unstructured data. Because the LLM has the graph as context, it is able to make inferences about the data that should be extracted, even where the language used has not been explicitly defined in the graph. In effect, an LLM can find semantic matches, within degrees of certainty, and inject them into the knowledge graph.
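One way to picture this bridging step is with embeddings: the free-text input is compared against the concepts the graph already knows about, and a fact is only injected when the similarity score (the degree of certainty) clears a threshold. The model choice and threshold below are illustrative assumptions, not a prescription.

```python
# Sketch: bridging free text into the graph's vocabulary with embeddings.
# A fact is only injected when the similarity score (our degree of
# certainty) clears a threshold. Model name and threshold are assumptions.
import math
from openai import OpenAI

client = OpenAI()

graph_concepts = ["accidental_damage", "theft", "flood_damage", "valid_licence"]
user_input = "I reversed into a bollard and dented the rear bumper"
known_facts = set()  # stands in for the symbolic knowledge base

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in out.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

concept_vectors = embed(graph_concepts)
[input_vector] = embed([user_input])

scores = {c: cosine(input_vector, v) for c, v in zip(graph_concepts, concept_vectors)}
best_concept, certainty = max(scores.items(), key=lambda kv: kv[1])

if certainty > 0.4:  # threshold chosen arbitrarily for illustration
    known_facts.add(("incident", "involves", best_concept))

print(best_concept, round(certainty, 2))
```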

Of course, this only works if you have a powerful reasoning engine that is capable of processing such queries.

By way of example, imagine an insurance policyholder asking questions about whether they are covered by their insurance policy after an incident. They may provide descriptions of what happened using natural language.

Using an LLM alone is not sufficient, as it cannot provide a reasoned answer that we can be confident makes sense in the context of the policy.


However, if we extract a knowledge graph from the policy documentation, we are able to reason about the policyholder’s cover over this formal model. By leveraging an LLM, we can accept unstructured input into the symbolic domain, and produce a trustworthy, evidence-backed decision that we can be sure correctly adheres to the terms of the policy.
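Pulling the pieces together, the sketch below reuses the toy infer() reasoner from the earlier example. The policy rule stands in for a graph extracted from the policy documentation, and the facts stand in for the LLM’s mapping of the policyholder’s description into that graph’s vocabulary; the output is a coverage decision together with the evidence that justified it.

```python
# End-to-end sketch of the insurance example, reusing the toy infer()
# reasoner from the earlier sketch. The rule stands in for a graph extracted
# from the policy document; the facts stand in for the LLM's mapping of the
# policyholder's description into that graph's vocabulary.

policy_rules = [
    {"if": [("claim", "cause", "accidental_damage"),
            ("driver", "holds", "valid_licence"),
            ("claim", "reported_within", "30_days")],
     "then": ("claim", "is", "covered")},
]

claim_facts = {
    ("claim", "cause", "accidental_damage"),   # from the incident description
    ("driver", "holds", "valid_licence"),      # from the policyholder's record
    ("claim", "reported_within", "30_days"),
}

claim_facts, evidence = infer(claim_facts, policy_rules)  # infer() defined earlier

is_covered = ("claim", "is", "covered") in claim_facts
print(is_covered)                               # True
print(evidence[("claim", "is", "covered")])     # the evidence chain behind it
```

Because the decision is reached symbolically, the evidence chain can be shown to the policyholder, an auditor or a regulator in plain terms.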

This methodology delivers the best of all worlds. It enables organisations to merge the world of probabilistic modelling that is synonymous with two decades of ML with what at its heart is a symbolic technology – knowledge graphs. It can handle natural language inputs – and generate natural language outputs – with a symbolic and evidence-based core.

This AI configuration is neurosymbolic, leveraging knowledge graphs, symbolic reasoning, LLMs and neural networks in a hybrid arrangement.

What’s particularly exciting about neurosymbolic AI is that it breaks what has become a saturated mindset: that all AI must start with data. The approaches inherited from digital-native businesses like Apple, Google and Meta – which have succeeded in automating advertising decisions using massive amounts of rich data – have led to a false assumption that the same recipe translates to other sectors, where more complex decisions are being made with poorer data. That mindset led everyone to believe that all AI must start with data – a problem that decision intelligence has now resolved.

Looking forward, trust is now going to be the bedrock of AI adoption. As regulatory frameworks like the EU’s AI Act take shape, the imperative for explainable, unbiased AI becomes clear. Neurosymbolic AI, with its transparent reasoning, is well-positioned to meet these demands, offering a form of AI that regulators and the public can accept.

As we move from a data-centric to a decision-centric AI mindset, the promise of AI that the world can trust becomes tangible. For data scientists and business leaders alike, the call to action is clear: embrace the decision intelligence approach and unlock the full potential of AI, not just as a tool for analysis, but as a partner in consequential decision-making that earns the trust of those it serves.

