Is AI helping improve the risk assessment process?


Whilst its presence in the industry has been known for many years, the impact and use of AI in financial services has ramped up massively over the past three years. In the area of risk assessments, how has AI changed the process?

According to Anna Shute, product manager at Qkvin, risk management is a crucial aspect of most regulated sectors such as financial services, especially at a time like today, when threats and uncertainties are ever increasing.

She said, “Managing risks helps protect reputation and minimise losses whilst fostering growth and innovation. The process involves risk identification, assessment, response and monitoring. In most organisations this process spans various departments and people and is labour intensive.

“The risk assessment part of the process focuses on the identification and evaluation of risks for impact, likelihood and mitigation strategies. Typically, this part of the process is highly manual, people dependent, and requires domain knowledge as well as organisation context and risk appetite. Artificial intelligence has been in use for a long time in specific areas of risk assessment and mitigation, such as credit risk modelling and fraud and anomaly detection.”

Beyond this, Shute explained that recent developments based on transformer and large language models are having a substantial impact in this space. One particular development is improved accuracy in detecting fraud. “The ability to process large amounts of unstructured data – text, audio and video – helps process social media behaviours, online purchasing habits, and payment histories. Combined with traditional risk modelling and anomaly detection techniques, these help improve the accuracy of risk and fraud assessments,” said Shute.

Another is assessing the implications of regulatory change. “These approaches are also helpful in the analysis of legal guidelines and documents and in identifying relevant compliance issues. AI can help identify material changes and generate summaries and recommendations on the impact of those changes. Tools such as Compliance.AI help automate this process.”

There is also growth in dynamic automated insights, with tools such as Zest.AI helping to automate processes to make smarter and faster decisions – this, Shute explains, leads to improved customer satisfaction and overall performance of investment portfolios whilst managing risk effectively.

There are also upskilling and performance boosts. “These new technologies also enable operations teams to understand and assess risks and make better decisions. AI agents across various products consolidate information and can guide ops teams step by step, helping them upskill as well as improving productivity,” said Shute.

Meanwhile, Jon Elvin, strategic risk advisor at Saifr, said, “Risk assessments form the blueprint and foundation for any successful regulatory compliance program. Artificial intelligence can be a tremendous, positive catalyst and an efficient input to their design, build, and maintenance. When done well, a risk assessment brings transparency and confidence to both management and regulatory stakeholders, truly driving the allocation of resources, effort, and control priorities. Assessments that are poor or incomplete, or that lack accurate data and proof points, lead to gaps in coverage, misalignment, and the potential for financial loss, violations of law, and intense regulatory scrutiny.

“An organization where the risk assessment becomes a ‘check the box’ paper exercise once a year does not meet today’s expectations. It must be living, breathing, and constantly updated, balancing new legal requirements, changes in business strategies, product mix, and customer portfolio characteristics.

“Regulatory enforcement orders and after-action reviews from firms that experienced significant exposures often point to a lack of line of sight, complacency in internal control execution, and failure to adapt. As it relates to an anti-money laundering and financial crime program, you will see regulators focus the majority of their initial scoping and fieldwork on the results of the risk assessment, their interpretation of its thoroughness, and internal control results. The combination of inherent risk recognition, control environment design and performance, and residual risk is their beacon driving regulatory intensity.

“AI can be a force multiplier, adding value in several ways. First, as a large canvassing tool, it contributes to the identification of new laws, regulations, and guidance, and helps compare them against existing written documentation and understanding. Second, institutions have no shortage of data, but they certainly remain challenged in their ability to harness data completely and bring meaningful insights and actionable decision points in a timely manner. Machine learning, deep learning, and big data analytics techniques drive speed and thoroughness in a more complete data view, allowing human decision makers to recognize changing dynamics amongst multiple interrelated variables and react.

“Third, it is the intelligence and value of earlier, first-mover recognition as data changes, versus stale and lagging legacy insights, that will add the most value. Humans alone cannot process such volumes and draw context quickly and consistently. Fourth, it can drive greater automation and efficiency in the testing and audit of controls. This is a notable change from legacy practices of testing once every year or two. Conditions on the ground change too fast, and AI automation can test and audit in real time, allowing for much earlier intervention.

“In conclusion, AI advances should and will positively change the risk assessment process, driving more dynamic, living, and predictive insights and results, truly achieving an AI-assisted decision-making posture and confidence for risk professionals.”

Risks and reward

Meanwhile, Mohammad Mirzapur, senior machine learning engineer at Napier AI, explained that for financial crime compliance, AI can help in processing large volumes of data to classify transactions as fraudulent or indicative of financial crime, and in reducing false positives in traditional rules-based workflows.
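The rules-then-rescore pattern Mirzapur describes can be sketched as follows. Every field name, threshold, and weight here is hypothetical, and a production system would replace the hand-weighted score with a trained ML model; the sketch only illustrates how a secondary score can suppress false positives from a rules layer:

```python
# Hypothetical rule threshold for illustration only.
RULE_AMOUNT_LIMIT = 10_000

def rules_flag(txn: dict) -> bool:
    """Legacy rules layer: flag large or cross-border transactions."""
    return txn["amount"] > RULE_AMOUNT_LIMIT or txn["cross_border"]

def risk_score(txn: dict) -> float:
    """Toy secondary score combining simple behavioural signals.

    A real system would use a trained classifier here; this weighted
    sum merely stands in for it to show the triage step.
    """
    score = 0.0
    if txn["amount"] > RULE_AMOUNT_LIMIT:
        score += 0.4
    if txn["cross_border"]:
        score += 0.2
    if txn["new_counterparty"]:
        score += 0.3
    if txn["velocity_24h"] > 5:  # many transfers in the last 24 hours
        score += 0.3
    return min(score, 1.0)

def triage(txns: list, threshold: float = 0.6) -> list:
    """Keep only rule alerts whose secondary score clears the threshold."""
    alerts = [t for t in txns if rules_flag(t)]
    return [t for t in alerts if risk_score(t) >= threshold]
```

A transaction that trips a rule on amount alone but shows no other risk signals scores below the threshold and is suppressed, which is the false-positive reduction described above.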

“However, using artificial intelligence also comes with several risks; especially in financial services and compliance. Reputation risk concerns related to AI use in financial institutions include potential biases in AI decision-making, which can lead to discriminatory practices in lending or hiring, and transparency issues, where the opacity of AI algorithms might cause mistrust among consumers if decisions cannot be easily explained and audited,” said Mirzapur.

Mirzapur stated that the data samples used to train and test algorithmic systems can often be inadequate and unrepresentative of the populations from which they draw inferences. As a result, he outlined, biased and discriminatory outcomes are possible because they are based on flawed data.

“Biased features, metrics, and analytic structures for the models that enable data mining can reproduce, reinforce, and amplify the patterns of marginalization, inequality, and discrimination. Bias mitigation is essential, requiring a deep understanding of test and training data, and identifying underrepresented segments to ensure balanced datasets that avoid unwarranted correlations.

“Synthetic data sets can be used to enhance explainability and protect against potential bias risk – they recreate the risky activity in isolation from geographic location, nationality, gender, or any other characteristic, which is then distributed through your data to ensure your models are learning the correct patterns,” said Mirzapur.
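The decoupling Mirzapur describes can be illustrated with a minimal sketch: replicate an example of the risky behaviour while randomising the protected attributes, so a downstream model cannot associate the pattern with any one group. The field names and attribute values are invented for illustration:

```python
import random

# Hypothetical protected attributes and their possible values.
PROTECTED = {
    "nationality": ["A", "B", "C", "D"],
    "gender": ["F", "M", "X"],
}

def synthesise(risky_example: dict, n: int, seed: int = 0) -> list:
    """Create n synthetic copies of a risky pattern with protected
    attributes drawn uniformly at random, decoupling the behaviour
    from any single demographic group."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = []
    for _ in range(n):
        sample = dict(risky_example)  # keep the behavioural pattern
        for attr, values in PROTECTED.items():
            sample[attr] = rng.choice(values)  # randomise the trait
        out.append(sample)
    return out
```

Mixing these synthetic records into the training data pushes the model to learn the behavioural pattern itself rather than a spurious correlation with nationality or gender.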

RegTech firm Ascent remarked that AI can assist risk assessments in several key areas. After setting the baseline of what is required, AI can help carry out differential analysis to identify risks and automatically apply scoring to assist in prioritisation. From there, remedies can be suggested.

The firm said, “AI can help with categorization, identification, synthesis, summary, and information retrieval, and increasingly in ways that allow a natural interface between user and AI. This does come with some caveats, as the responsible use of AI to accomplish tasks requires safeguards against unintended bias, inaccurate or incomplete data, hallucinations, and other AI governance issues. This is where the use of an experienced, professional AI-powered software vendor can help companies gain the benefits of AI without needing to become AI experts themselves.”

Backbone of the industry

The role of risk assessments has long been vital to any company’s risk management strategy, with Joseph Ibitola, growth manager at Flagright, calling them the ‘backbone’ of any solid strategy.

“But the traditional approach – manual, laborious, and often static – simply doesn’t cut it anymore in today’s fast-moving digital landscape. This is where AI steps in, not as a replacement for human expertise, but as a powerful enhancement,” said Ibitola.

According to Ibitola, AI has redefined what it means to assess risk by making the process more dynamic, predictive, and proactive. Machine learning models can sift through vast amounts of data, spotting patterns and anomalies at a scale and speed that no human could ever match.

“For example, AI-powered fraud detection tools don’t just flag suspicious activity; they learn from each interaction, constantly refining their understanding of what constitutes a risk. This continuous learning means that AI-driven risk assessments become more accurate and effective over time, turning risk management from a reactive process into a proactive shield,” described Ibitola.

Ibitola gave the example of a FinTech company managing millions of transactions. Traditional methods, he said, would mean teams combing through data looking for irregularities. But with AI, anomalies are flagged in real time – and not only that, AI models can predict which transactions might become a problem down the line.
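The real-time flagging Ibitola describes can be sketched with a per-account streaming detector. This toy example looks at a single assumed feature (transaction amount) and uses Welford's online algorithm to keep running statistics, flagging transactions that deviate sharply from that account's own history; real systems combine many features and trained models:

```python
import math
from collections import defaultdict

class StreamingAnomalyDetector:
    """Flag transactions far outside an account's own history.

    Maintains a running mean/variance per account (Welford's online
    algorithm), so each new observation also updates the model -- the
    'learning from each interaction' idea, in miniature.
    """

    def __init__(self, z_threshold: float = 3.0, min_history: int = 10):
        self.z = z_threshold
        self.min_history = min_history
        # per account: [count, running mean, sum of squared deviations]
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def observe(self, account: str, amount: float) -> bool:
        n, mean, m2 = self.stats[account]
        flagged = False
        if n >= self.min_history:  # need some history before scoring
            std = math.sqrt(m2 / (n - 1))
            if std > 0 and abs(amount - mean) / std > self.z:
                flagged = True
        # Update running statistics with the new observation.
        n += 1
        delta = amount - mean
        mean += delta / n
        m2 += delta * (amount - mean)
        self.stats[account] = [n, mean, m2]
        return flagged
```

An account that normally moves around 100 per transaction would have a sudden 10,000 transfer flagged immediately, without any batch review.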

“Of course, AI is not without its limitations. It’s only as good as the data it’s fed and still requires human oversight to ensure it doesn’t generate false positives or overlook nuanced scenarios. But in today’s high-stakes environment, where agility is key, AI is proving to be a game-changer for risk assessments,” Ibitola commented.

AI and Gen AI

Michael Thirer, legal director at Muinmos, believes that while AI is helping to improve the risk assessment process, there is an important caveat to understand.

“There is AI and there is Gen AI,” he said. “Gen AI is currently a disruptive technology in more than one way. In the context of KYC, for example, it has the potential to reduce general levels of trust, as it provides new possible ways of falsifying identities, documents, etc.

“AI, on the other hand, is a tool which has been used for many years now, and it helps enormously in the risk assessment process. For example, it allows us to analyse vast amounts of data, uncover patterns and risks that might otherwise go unnoticed, make more precise decisions much faster, and more. In the context of Gen AI, it also allows us to counter Gen AI-assisted fraud.”

A key thing Thirer outlines for people to remember is that the use of AI needs to be in accordance with several basic principles, most of which appear in key legislation like the EU AI Act.

“Basically, AI needs to be used while maintaining transparency, explainability and accountability, so it can be safe and trustworthy.

“So yes, overall, AI is a valuable tool that can make real-time risk assessments and help prevent financial crime, and luckily the regulatory and guidance framework for its use has matured, providing good guidance on how to use it responsibly,” he said.

Role in compliance

Chor Teh, director, financial crime compliance industry practice lead at Moody’s, remarked that AI is playing an increasingly vital role in enhancing the risk assessment process, particularly in the compliance sector.

According to a survey conducted by Moody’s in the summer of 2024, a growing number of FinTech firms are actively using AI for compliance compared to a Moody’s study from November 2023. This increasing popularity, Teh stated, can be attributed to AI’s ability to analyse large datasets swiftly and accurately, helping organisations identify and mitigate risks more effectively.

“A key factor in this is Entity Verification, which has emerged as essential for improving AI’s accuracy in risk and compliance activities. Moody’s study shows that 27% of risk and compliance professionals surveyed view Entity Verification as critical to enhancing the precision of AI, with a further 50% recognising its value in improved AI accuracy.

“By ensuring AI systems rely on validated, accurate data, Entity Verification mitigates risks such as ‘hallucinations’, where AI might generate incorrect or misleading information. This is especially crucial when assessing business relationships and compliance risks, as these processes depend entirely on accurate information,” concluded Teh.

Copyright © 2024 RegTech Analyst
