Reflections on the BOE/FCA Artificial Intelligence Public Private Forum Report

28th February 2022 Adam George

The Bank of England (BOE) and Financial Conduct Authority (FCA) have recently published a joint report into artificial intelligence and its application in the financial services industry.

As someone with a background in computer science and software engineering, I am intrigued to see what's in this report and whether it chimes with what we're trying to do at PowerPlanner: embrace technology to improve regulatory compliance and give better financial advice more efficiently.

What is Artificial Intelligence?

That depends on whom you ask.

Anyone involved in marketing will probably tell you that their technology stack is "AI" when in fact it is just a database against which simple lookup queries are run. Salespeople will likewise claim their solution is "AI" and capable of "machine learning" even if it simply processes some data and deterministically produces output based on those data and a few other variables.

However, if you ask a tech person, they'll (hopefully) tell you something along the lines of the ISO standard definition:

"Artificial Intelligence is an interdisciplinary field, usually regarded as a branch of computer science, dealing with models and systems for the performance of functions generally associated with human intelligence, such as reasoning and learning."

Thankfully, the BOE and FCA have gone with the more scientific definition in their report; no buzzword bingo here!
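
To make the distinction concrete, here is a minimal, hypothetical sketch (in Python, with invented names and data): the first function is a deterministic lookup of the kind often marketed as "AI", while the second actually learns its decision rule from data.

```python
# A minimal, hypothetical sketch contrasting a deterministic lookup
# (often marketed as "AI") with a model that actually learns from data.
# The risk bands and training data below are invented for illustration.

from sklearn.linear_model import LogisticRegression

# 1. "AI" in name only: a fixed lookup table applied deterministically.
RISK_BANDS = {"low": 0.02, "medium": 0.05, "high": 0.11}

def quoted_rate(risk_band: str) -> float:
    """Return a rate from a hard-coded table -- no learning involved."""
    return RISK_BANDS[risk_band]

# 2. Machine learning: the decision rule is estimated from historical data.
X_train = [[25, 1200], [40, 300], [31, 800], [55, 150]]   # e.g. age, monthly surplus
y_train = [1, 0, 1, 0]                                    # e.g. took up the product or not

model = LogisticRegression()
model.fit(X_train, y_train)

print(quoted_rate("medium"))          # always the same answer for "medium"
print(model.predict([[35, 600]]))     # answer depends on what was learned
```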

What is in the Report?

It's a pretty long and detailed document (>40 pages of dense text) so here's a summary for those who've not read it:

  • The BOE & FCA recognise the need to understand the technology and the risks/rewards it can bring, so the Artificial Intelligence Public-Private Forum (AIPPF) was started in October 2020 to further the dialogue between the public sector, the private sector, and academia on AI.
  • Firms are already using AI to improve operational efficiency, which helps to reduce processing time and costs as clients demand faster and more streamlined and robust services. The BOE & FCA feel duty-bound to aid its safe adoption, in light of its inevitable rise in popularity.
  • The key barriers to entry are the costs involved and the difficulty of finding the expertise required to identify use cases and utilise the data at a firm's disposal. Ensuring that those responsible for the use of AI in a firm have the relevant skills and knowledge is a major challenge.
  • The risks of using large data sets should not be underestimated. Adding more complexity and more parameters makes the outcomes harder to validate at a granular level, which can lead to biased decisions and unexpected outcomes. Any bugs in the AI algorithms could lead to regulatory breaches or financial loss.
  • Exploring financial scenarios using AI can amplify existing challenges, as the opacity of the AI "magic" can cover up flaws in the models, bugs in the software and so on. If AI systems are too complex and opaque then engaging responsibly with them becomes more difficult and the risks grow, so firms should focus on clear communication and "explainability" to mitigate these risks (a minimal sketch of one explainability technique follows this list).
  • Increasing regulation to drive up data standards and ensure compliant processing/aggregation of data may be needed, but the report is keen to stress that regulation should not stifle innovation.
  • Firms adopting AI should have an understanding of the data they're using, where it's come from and what it represents. This is crucial as AI models can often extract hidden patterns that can inadvertently accentuate existing biases. Firms that have historically valued their data and kept detailed logs will likely be at an advantage here, although not all data sets lend themselves particularly well to AI.
  • Firms need to consider the wider impact of models outside the initial field of application or business area, including on markets and consumers.
  • AI algorithm auditing is presented as a potential option for ensuring governance, ethics and compliance.
  • While the precise approach to be taken is still up for debate, governance frameworks and processes should be aligned with the risk and materiality of each AI use case, as more complicated cases involving more data and more significant outcomes will need greater levels of due diligence and require more time and resources. Governance frameworks should also seek to deliver a safe environment for testing AI models.
  • Reasonable Steps is a key concept in the FCA Code of Conduct and could be extended to the use of AI. Although the report suggests various ways this could be interpreted, it stops short of making any recommendations as to the preferred governance approach.
  • An industry consortium could serve as a next step towards developing standards and common solutions for AI in financial services. Regulators and industry practitioners should continue to monitor and support the safe adoption of AI in financial services, and use public-private engagement to elicit feedback and improve.
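
On the explainability point above, it may help to see what such a check can look like in practice. The following is a minimal sketch of one common technique, permutation feature importance, which measures how much a model's held-out accuracy drops when each input is shuffled. The data, feature names and model here are invented placeholders; the report does not prescribe any particular method.

```python
# A minimal sketch of one common "explainability" technique: permutation
# feature importance. All data and feature names here are invented
# placeholders -- the report does not prescribe any particular method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # e.g. income, age, loan-to-value
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven mainly by feature 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy fall when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, importance in zip(["income", "age", "loan_to_value"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```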

Where I Agree with the Report

I agree with the vast majority of the report, and I think it raises some important issues that anyone adopting AI should consider. It's right that AI begins with, and is all about, data. The availability of data, and systems' ability to collect and process lots of it - particularly in areas like natural language processing (NLP) and image recognition - means AI is more feasible than ever before.

I also agree that the lack of an industry-wide consensus on data standards in financial services makes it harder to innovate and collaborate. Firms can define their own standards separately, but if those standards are incompatible with one another then collaboration on AI is not feasible, and general AI innovation suffers as a result. Initiatives like Open Banking will help, but there's a long way to go.

Regarding governance, I concur that clear lines of accountability are crucial for AI model monitoring and change management. Of course, this requires knowledge. Firms should ensure there is an appropriate level of understanding and awareness of AI's benefits and risks throughout the organisation, and should be able to demonstrate why they are using an AI application rather than something simpler and easier to understand that produces similar outputs. After all, there's often a temptation to jump on the proverbial bandwagon just to be seen to be using the latest tech, but, surprisingly often, the latest thing is not the right tool for the job at hand.
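
One practical way to make that case is to benchmark the AI model against a simple, transparent baseline on the same data and keep the evidence. A minimal sketch of such a comparison follows; the data, features and models are placeholders chosen for illustration, not anything prescribed by the report.

```python
# A minimal sketch of justifying an AI model against a simpler baseline:
# if the complex model does not clearly beat a transparent one on the same
# data, the simpler model is probably the right tool. Data are placeholders.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))            # stand-in for client/product features
y = (X[:, 0] - X[:, 2] > 0).astype(int)   # stand-in for a suitability outcome

simple = LogisticRegression()
complex_model = GradientBoostingClassifier(random_state=0)

simple_score = cross_val_score(simple, X, y, cv=5).mean()
complex_score = cross_val_score(complex_model, X, y, cv=5).mean()

print(f"simple baseline accuracy: {simple_score:.3f}")
print(f"complex model accuracy:   {complex_score:.3f}")
# If the gap is marginal, the easier-to-explain model is the defensible choice.
```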

Underpinning all of this is the need for clear communication and common understanding. People must know what their AI does and does not do, and not make assumptions or blindly trust the algorithms, especially in the early days of adoption. Such complacency could easily lead to bad decisions being made and get a firm into serious trouble with the regulator.

Where I Disagree with the Report

Maybe it's just me, but I get the distinct impression from the report that there is a whiff of fear about AI coming from the regulators. I believe this comes from the perceived risks of "black box" AI systems, and is linked to the idea of "explainability" of a model and, in turn, how it can and should be regulated. There's also an apparent fear over autonomous decision making creating issues of accountability.

I'm not sure the regulator needs to worry about the low-level details of the decision-making process, though. I think the regulator should abstract away from how decisions are made and continue to regulate the validity of a decision rather than how a firm arrives at it.

After all, advice is ultimately either good or bad regardless of whether it's been arrived at by human intelligence or artificial intelligence. Any financial advice firm adopting AI needs to ensure its outputs are compliant, irrespective of the extent to which AI is the driving force behind those decisions.

It's not dissimilar from outsourcing in my view. The IFA is ultimately responsible for the advice even if the paraplanning is outsourced, and so if some decision making is effectively outsourced to an AI system, this simple fact remains. I therefore don't believe regulatory bodies need to concern themselves too much with the technology that underpins the decisions of the firms they hold to account.

Another point I'd challenge would be the one made about transparency being a security risk. The report contains an assertion that "AI systems can be quite complex and some of the elements, though not necessarily the AI component itself, may be compromised and open to cyber-attack if too much detail of the inner workings are disclosed."

This is quite a generic statement, but it's important to note that transparency does not necessarily lead to vulnerability. Indeed, in IT, the opposite is usually true.

The software world is used to having public source code, and much of this public code has been running in production in banks, government systems, manufacturing software and so on for decades. Yet many open source software products are considered amongst the best-maintained and most secure in their field.

Take, for example, web servers. By far the most popular on the net are Apache and nginx, both of which have completely public source code. Part of the reason for their popularity is that, because the code is public, anyone can spot bugs or security issues and submit a bug report or a patch. This means issues tend to be spotted and fixed sooner than in closed-source projects, where only a limited in-house team can review the code rather than a large community of contributors.

I think the same can be true for AI. Being public about how the algorithms run and how the model works doesn't necessarily mean security is a problem. Obviously real data sets shouldn't be published (huge GDPR issues there!) but I see no reason not to share the workings if the wider industry can learn from it.

Indeed, if there are security issues and these AI systems are compromised, it's more likely to be through some kind of data poisoning attack. Openness about how data is validated and processed may therefore actually help: weaknesses that leave a system susceptible to poisoning can be found and fixed, and the whole community benefits.
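
As an illustration of the kind of openness that could help, here is a minimal, hypothetical sketch of screening incoming training records against a trusted reference set before they are allowed to influence a model. The fields, thresholds and contamination rate are invented for the example.

```python
# A minimal, hypothetical sketch of validating new training data before it
# can influence a model -- one simple line of defence against data poisoning.
# The reference data, fields and contamination rate are invented.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
reference_data = rng.normal(loc=0.0, scale=1.0, size=(2000, 4))  # trusted history
incoming_batch = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(95, 4)),   # plausible new records
    rng.normal(loc=8.0, scale=0.5, size=(5, 4)),    # suspicious outliers
])

# Fit an anomaly detector on trusted data, then score the incoming batch.
detector = IsolationForest(contamination=0.01, random_state=0).fit(reference_data)
flags = detector.predict(incoming_batch)            # -1 = anomalous, 1 = normal

clean_batch = incoming_batch[flags == 1]
print(f"accepted {len(clean_batch)} of {len(incoming_batch)} records;"
      f" {np.sum(flags == -1)} flagged for review")
```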

Summary

I think it's fantastic that the BOE & FCA are looking to support the responsible use of technology in the financial services industry. I agree with the vast majority of their report and really hope they stick to their principle of not introducing regulation that could stifle innovation. In my view, they can still regulate perfectly well without having to police exactly how decisions are arrived at, whether that's by human intelligence or artificial intelligence.

The future is bright in this space, but adopters need to be careful and ensure all their stakeholders understand what information their AI models really give them and, just as importantly, do not give them.

Interested in Employing AI in Your Business? Contact Us Today
