In the quest to minimise threats and maximise rewards, risk officers have become more reliant on artificial intelligence. But, while AI is increasingly being used to spot patterns and behaviours that may indicate fraud or money laundering — and, more controversially, to recognise faces to verify customer identity — its wider use to manage risk in institutions has been limited.

Now, though, the release of AI chatbots such as ChatGPT — which use “natural language processing” to understand prompts from users and generate text or computer code — looks set to transform risk management functions in financial services firms.

Some experts believe that, over the next decade, AI will be used for most areas of risk management in finance — including assessing new types of risk, working out how to mitigate them, and automating and speeding up the work of risk officers.

“The genie is out of the bottle,” says Andrew Schwartz, an analyst at Celent, a research and advisory group specialising in financial services technology. More than half of large financial institutions are, at present, using AI to manage risk, he estimates.

Growth market

Conversational, or “generative”, AI technologies such as OpenAI’s ChatGPT or Google’s Bard can already analyse vast amounts of data in company documents, regulatory filings, stock market prices, news reports, and social media.

That may help, for example, to improve current methods for assessing credit risk, or to create more intricate and realistic “stress testing” exercises — which simulate how a financial company could handle adverse market or economic situations — says Schwartz. “You just have more information and, with more information, there could be a deeper and theoretically better understanding of risk.”
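Schwartz’s point about stress testing is easier to see with a toy example. The Python sketch below is a minimal illustration of the basic shape of such an exercise — revalue a portfolio under hypothetical adverse scenarios and compare the losses. Every figure and shock here is invented for illustration; real regulatory stress tests are far more elaborate.

```python
# Toy stress test (illustrative only): revalue a portfolio under
# hypothetical adverse scenarios and report the loss in each one.
portfolio = {"equities": 600_000.0, "bonds": 300_000.0, "cash": 100_000.0}

# Each scenario maps an asset class to a hypothetical percentage shock.
scenarios = {
    "equity_crash": {"equities": -0.40, "bonds": 0.05, "cash": 0.0},
    "rate_shock":   {"equities": -0.10, "bonds": -0.15, "cash": 0.0},
    "stagflation":  {"equities": -0.20, "bonds": -0.10, "cash": -0.05},
}

base_value = sum(portfolio.values())
for name, shocks in scenarios.items():
    stressed = sum(value * (1 + shocks[asset]) for asset, value in portfolio.items())
    print(f"{name}: stressed value {stressed:,.0f}, loss {base_value - stressed:,.0f}")
```

The claim in the article is that generative AI could make the scenarios themselves richer — drawing on filings, news, and market data — rather than change this underlying revaluation arithmetic.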

Sudhir Pai, chief technology and innovation officer for financial services at consultancy Capgemini, says that some financial institutions are in the early stages of using generative AI as a virtual assistant for risk officers.

Such assistants collate financial market and investment information and can offer advice on strategies to mitigate risk. “[An] AI assistant for a risk manager would allow them to gain new insights on risk in a fraction of the time,” he explains.

Financial institutions are typically reluctant to talk about any early use of generative AI for risk management, but Schwartz suggests they may be tackling the critical issue of vetting the quality of data to be fed into an AI system, and removing any false data.

Initially, larger firms may focus on testing generative AI in those areas of risk management where conventional AI is already widely used — such as crime detection — says Maria Teresa Tejada, a partner specialising in risk, regulation and finance at Bain & Co, the global consultancy.

Generative AI is a “game changer” for financial institutions, she believes, because it enables them to capture and analyse not only large volumes of structured data, such as spreadsheets, but also unstructured data, such as legal contracts and call transcripts.

“Now, banks can better manage risks in real time,” says Tejada.

SteelEye, a maker of compliance software for financial institutions, has already tested ChatGPT with five of its clients. It created nine “prompts” for ChatGPT to use when analysing clients’ text communication for regulatory compliance purposes.

SteelEye copy-and-pasted the text of clients’ communications — such as email threads, WhatsApp messages, and Bloomberg chats — to see whether ChatGPT would identify suspicious communications and flag them for further investigation. For example, it was asked to look for any signs of possible insider trading activity.
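SteelEye has not published its prompts or integration code, so the Python sketch below is only a minimal illustration of the general pattern the company describes: paste a communication thread into a chat model alongside a compliance prompt and read back a verdict. The prompt wording, model choice, and FLAG/CLEAR output format are all hypothetical assumptions, not SteelEye’s actual implementation.

```python
# Minimal sketch (not SteelEye's code): screen one communication thread
# for possible insider-trading signals using a single compliance prompt.
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COMPLIANCE_PROMPT = (
    "You are assisting a compliance team. Read the following messages and "
    "reply with FLAG if they contain possible signs of insider trading, "
    "otherwise reply CLEAR. Briefly justify your answer."
)

def screen_thread(messages: list[str]) -> str:
    """Return the model's FLAG/CLEAR verdict for one communication thread."""
    thread_text = "\n".join(messages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": COMPLIANCE_PROMPT},
            {"role": "user", "content": thread_text},
        ],
    )
    return response.choices[0].message.content

# Example: a message fragment a compliance officer might paste in.
print(screen_thread([
    "Trader A: Big announcement coming Thursday, keep it quiet.",
    "Trader B: Understood. I'll load up on calls before then.",
]))
```

In practice, as the article notes, any such flag would go to a human compliance specialist for review rather than trigger action on its own.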

Matt Smith, SteelEye’s chief executive, says that ChatGPT proved effective at analysing and identifying suspicious communication for further examination by compliance and risk specialists.

“Something that could take compliance professionals hours to sift through could take [ChatGPT] minutes or seconds,” he notes.

Accuracy and bias

However, some have expressed concern that ChatGPT, which pulls in data from sources including Twitter and Reddit, can produce false information and may breach privacy.

Smith’s counter to this is that ChatGPT is being used solely as a tool, and that compliance officers take the final decision on whether to act on the information.

Still, there are doubts as to whether generative AI is the right technology for the highly regulated and inherently cautious risk management departments in financial institutions, where data and complex statistical models must be carefully validated.

“ChatGPT is not the answer for risk management,” says Moutusi Sau — a financial services analyst at Gartner, a research company.

One problem, flagged by the European Risk Management Council, is that the complexity of ChatGPT and similar AI technologies may make it hard for financial services firms to explain their systems’ decisions. Systems whose results cannot be explained are known, in AI jargon, as “black boxes”.

Developers of AI for risk management, and users of it, need to be very clear about the assumptions, weaknesses and limitations of the data, the council suggests.

Regulatory questions

A further problem is that the regulatory approach to AI differs across the world. In the US, the White House recently met with technology company bosses to discuss the use of AI before formulating guidelines. But the EU and China already have draft measures to regulate AI applications. In the UK, meanwhile, the competition watchdog has begun a review into the AI market.

So far, discussion about its regulation has focused on individual rights to privacy, and protection from discrimination. A different approach may be required for regulating AI in risk management, though, so that broad principles can be translated into detailed guidance for risk officers.

“My sense is that regulators will work with what they’ve got,” says Zayed Al Jamil, a partner in the technology group at law firm Clifford Chance.

“They will not say that [AI] is banned [for risk management] or be extraordinarily prescriptive . . . I think that they will update existing regulations to take into account AI,” he says.

Despite these regulatory questions, and doubts over generative AI’s reliability in managing risk in financial services, many in the industry believe it will become far more common. Some suggest it has the potential to improve many aspects of risk management simply by automating data analysis.

Schwartz of Celent remains “bullish” about AI’s potential in financial institutions. “In the medium term, I think we will see a huge amount of growth in what [AI tools] are able to do,” he says.


