Will AI help spread financial inclusion at a time when families are struggling to make ends meet? Or will it merely help organizations that are already rich and powerful – the world's banks and financial services providers – make even more money?
This was an important question for a recent Westminster Business Forum policy event on AI in the Financial Services sector. Indeed, it went right to the heart of who benefits from the technology, and where value accrues in society – as we are assured it will.
Alex Waite is Partner at pensions and investments consultancy Lane Clark and Peacock, and Chair of the AI Committee of the Association of Consulting Actuaries. He told delegates:
My overarching theme is balancing the opportunities of AI with the risks. Clearly, there are efficiencies, but I think the real benefit of AI is achieving engagement with people, and the inclusion of them in financial matters, because many just don't understand finance.
A fair point. But is there a risk that banks might use the technology to push services aggressively to customers, some of whom may be vulnerable or have no need of them? After all, the sector has not always been a model of good behavior this century, having indulged in both the mis-selling of products and market rigging – the Libor and Forex scandals – not to mention flogging sub-prime mortgages and triggering the 2008-09 financial crash.
Waite seemed to acknowledge the point, saying:
There are real benefits to using AI, but those benefits should go to policyholders and members of pension funds. I would like to see regulation that says, where those benefits exist, organizations are required to use AI to seek them out for individuals.
Otherwise, the only reason that people use AI will be to make more money! And I think that's the wrong direction. We should have AI for public good and to get more benefits to policyholders and members. So, I'd like to see regulation encouraging action – rather than inaction, which I see too often.
The challenge of trust and 'prudent person' standards
Another issue is the question of public – and regulators' – trust. While, rightly or not, many people trust generative AIs, Large Language Models, and chatbots to give them critical data and insights, the technology's problem with distinguishing fact from fiction – including its own hallucinations – is well known.
A secondary issue is acknowledging data provenance: AI-enabled search seems designed to keep users within vendors' walled gardens, rather than refer people out to external information sources (see diginomica, passim). In such a context, can we trust AIs to give us accurate, reliably sourced, and verifiable information?
Waite raised an interesting point:
In pensions law in the UK, we use trust law rather than contract law, so we are required to act as a prudent person. But can an AI be that prudent person? It's a good question, and the interaction between an AI and a potential regulator, or The Pensions Regulator, is going to be hard to work out.
So, what can we do until we have clarity on whether an AI can act as a prudent person or not? We will certainly need that clarity in the future.
An excellent observation, and that clarity may be years away. On X over the weekend, for example, some users shared stories of asking chatbots for the origins of completely fictitious phrases. Many AIs obliged and supplied plausible, in-depth histories for ideas that were entirely made up by their prompters.
The conclusion is troubling: some chatbots are clearly unable to distinguish fact from fiction, which makes it hard to regard an AI as a 'prudent person' in law.
Which brings us to the thorny issue of bias: the problem of systemic inequities in historic data sets being automated and perpetuated by AIs. For example, if, over the years, business loans and other financial services have predominantly been granted to men, might an AI use that training data to deny services to women? And so on.
Waite's response was pragmatic – and refreshingly free of the hype or cultism that often characterizes these debates. He said:
There will always be bias in generative AI. You can't get rid of all the bias, but you can identify specific biases that you want to eliminate. But even then, you might have variables that are picked up by the AI and still bring through some level of bias.
So, you can't have legislation that says 'We will have zero bias', or 'We will not tolerate any bias'. All you can do is tone down those biases – you can't guarantee that you'll remove all of them, by the very nature of Gen-AI.
So, if you say, 'We will only tolerate zero bias', then you're really saying, 'We can't use Gen-AI to fill that solution'. So, I'm always balancing the opportunities with the risks. The opportunities are huge for society, but the risks do need to be managed.
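Waite's point about proxy variables is worth unpacking. Even if a protected attribute such as gender is stripped from the training data, a correlated feature can quietly reintroduce the same disparity. The sketch below is purely illustrative – a hypothetical, synthetic example in Python, not anything presented at the forum – and the variable names and numbers are invented:

```python
# Hypothetical, synthetic illustration of the proxy-variable problem.
# The model never sees the protected attribute, yet bias leaks back in
# through a correlated feature. All names and figures are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic 'historic' lending data: gender (0/1) is correlated with
# part-time working, and past approvals penalized gender == 1 directly.
gender = rng.integers(0, 2, n)
part_time = (rng.random(n) < np.where(gender == 1, 0.6, 0.2)).astype(int)
income_k = rng.normal(35 - 5 * part_time, 8, n)   # income in thousands

logit = 0.08 * income_k - 1.2 * gender - 0.5       # biased historic decisions
approved = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train WITHOUT the protected attribute: income and the proxy only.
X = np.column_stack([income_k, part_time])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Approval rates still diverge by the excluded attribute,
# because part-time status acts as a proxy for it.
for g in (0, 1):
    print(f"gender={g}: predicted approval rate = {pred[gender == g].mean():.2%}")
```

In this toy data, the model is never shown gender, yet its approval rates still diverge by gender because part-time status stands in for it – which is exactly why guarantees of 'zero bias' are so hard to give.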
Enterprise adoption – reality vs. hype
It is also essential to recognize that, despite the explosive growth in individual use of ChatGPT (half a billion users or more) – often as shadow IT within the enterprise – most organizations are still at the 'What's the ROI?' phase.
Put another way, 'enterprise adoption' can be a misnomer. The reality is often piecemeal usage within organizations in line-of-business departments.
Indeed, recent reports have suggested that more formal AI projects are often abandoned, sometimes over concerns about data provenance, governance, skills, measurable returns, bias, security, and other issues.
Alexis Rog is CEO of Sikoia, a Financial Services customer verification platform. He acknowledged the scale of the challenge, saying:
For banks and insurers, there's no doubt that Gen-AI today is making inroads. But if we look beyond corporate ChatGPT accounts and chatbot upgrades, it's fair to say that most large financial institutions are still at the experimentation phase. Just looking for concrete announcements, they're not that easy to find in the UK.
And for me, when financial institutions claim they have, say, 200 possible use cases, it suggests a scatter-gun approach, spreading efforts too thin, rather than demonstrating a clear strategic direction for AI. So unsurprisingly, I don't see a big impact yet on larger financial institutions, on their P&L, let alone on changing industry dynamics.
This was a refreshing outbreak of candor from the leader of a company with skin in the game. That said, it is wise to remember that the Financial Services industry has a core purpose: making money. Behind that, it is a heavily regulated sector with an urgent need to manage risk. As a result, rushing into a faddy, overhyped market is unwise.
Rog said:
The ROI for many initiatives can still be unclear. So, the short-term gains we see are often offset by factors like wage inflation or substantial ongoing tech expenses.
Plus, we know that in the longer term, the real value in the sector will lie beyond productivity gains – in new AI-enabled financial products that don't exist yet.
This was another useful point. Survey after survey over the past three years has revealed that enterprise AI adoption is driven, primarily, by competitive pressures, a perceived need to cut costs, and by a desire to find new productivity gains (see diginomica, passim). Few, if any, credible reports have found different results.
This tells us that, for whatever reason, making the organization smarter seems to be a minor concern for many decision-makers – at least, those who 'buy the hammer then look for the nail'. So, if Rog is right, moving beyond productivity metrics to some golden age of AI-enabled Financial Services would seem to demand a different mindset from the industry: a coherent strategic approach, not a tactical cost-cutting one.
He continued:
There is uneven adoption. My view is that larger players have the advantage here: more data, more money to attract talent, and more capital. Mid-tier and smaller financial institutions often rely on third-party providers or don't have the tech capability in house.
And another question is, if everybody in the sector invests billions, then where does the value accrue? In past tech cycles, early value tends to accrue for infrastructure providers.
On that point, he cited NVIDIA, whose market cap (as of 21 April) stands at $2.3 trillion, saying:
If we dig a bit deeper, I would highlight four major blocks to AI adoption in financial institutions. One is that the vast majority of IT budget goes on legacy systems, and AI is only as good as the data you feed into it, which demands serious investments and change.
The second is cultural: financial institutions are generally risk averse, and rightly so. And they're slow moving, which contrasts sharply with the AI advances we see happening weekly.
At the moment, there are also concerns about regulation, specifically around IP, consumer duty, and resilience. Then there's the obvious AI literacy gap. Just because a lot of people are using ChatGPT doesn't mean they understand how AI works. And even if a boardroom does understand AI, that doesn't mean the workforce does.
All fair points. So, was there any melody – a catchy hook among all the notes of caution and pragmatism? Rog said:
There are clear high-impact use cases that financial institutions can double down on today. The first is that AI can significantly speed up and scale software development, which, if done in a secure and resilient way, for me represents arguably one of the most strategic levers for AI to help Financial Services transform itself.
An excellent point, though I would argue that one caveat is the uncertain provenance of any code that an LLM might recycle: was it originally proprietary? Was it obtained legitimately? Have alternative solutions been tested? And does the IT team understand the generated code?
But Rog continued:
The second is customer engagement, which is already being transformed. And there I would urge everybody to look at FinTechs like Klarna, which gives very detailed information in its IPO prospectus, as well as elsewhere, on the impact that AI has on its operations.
And another is partly what we do as Sikoia: making sense of messy, siloed, and unstructured customer data. Again, this is one of the biggest inefficiencies in Financial Services. And behind this overarching trend, the enabler is better operational risk management.
Rog explained that, behind the hype, it is critical to understand that AI adoption in Financial Services moves differently than in other sectors, and demands a "massive risk-killing effort".
In this regard, employment issues are no minor consideration, he warned:
If, suddenly, you can cut 300 people out of your call center, then you need a broader workforce transition plan for that.
So, recognize that the economic impact isn't all upside. AI-driven gains don't circulate in the economy like wages. They often concentrate wealth, and productivity growth won't automatically mean a fair distribution of benefits.
So, measuring AI's true effect means tracking how the value is shared, not only within companies, but also across the workforce and among consumers.
My take
Hear, hear on all of Rog's points, and full marks too for Waite's earlier observations.
This is precisely the kind of critical thinking the AI industry needs, and that enterprise users should demand – not the absurd hype in which many of our policymakers now indulge. Leadership demands vision, yes, but also common sense.