Overcoming hurdles to AI adoption in financial crime detection

Amid growing acknowledgment of AI's prowess in uncovering fraud and financial crime, a gap remains between the financial institutions (FIs) that recognise its potential and those that actually implement it.

Wolfgang Berner, co-founder and CTO/CPO of Hawk AI, addressed the roadblocks hindering trust in AI at the recent CeFPro Fraud & Financial Crime conference in New York. He highlighted actionable steps FIs can take to surmount these challenges.

A primary barrier to FIs trusting AI in financial crime detection is its perceived opacity. The AI's decision-making process often appears as a black box, making it difficult for FIs to fully comprehend how a model reaches its conclusions. Berner explained, “There’s a lack of transparency in output, but also in how you get the output.” Because FIs are accountable for an AI model's decisions, they need a thorough understanding of its decision-making process. The absence of clear governance protocols and the complexity of AI models add to FIs' hesitance to adopt the technology.

However, a potential solution comes in the form of the Wolfsberg Principles. These guidelines, formulated by 13 leading global banks, aim to ease apprehension around using AI to detect money laundering and terrorist financing. The principles underscore the need for transparency in AI algorithms, coupled with clear explanations of their decision-making processes. They also emphasise governance, urging FIs to establish definitive policies for AI's use in detecting financial crime and fraud. Adhering to these principles can bolster FIs' confidence in employing AI technology.

Explainability, the first of the Whitebox AI pillars Berner described, is crucial to ensuring transparency and accountability in AI models dedicated to detecting financial crime. Investigators can derive essential context from features integrated into those models, such as natural language narratives, decision probabilities, and verbalised decision criteria. Berner dismissed the common critique that complex models cannot be explained, asserting that they can provide explainable output given ingenious engineering and domain expertise.
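
To make the idea concrete, here is a minimal sketch in Python of how such explanation features might be attached to a monitoring alert. The class, field names, risk figure, and criteria are hypothetical assumptions for illustration, not Hawk AI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ExplainableAlert:
    """A transaction-monitoring alert that carries its own explanation."""
    transaction_id: str
    risk_probability: float        # model's estimated probability of suspicion
    triggered_criteria: list[str]  # verbalised decision criteria
    narrative: str = ""            # natural-language summary, built below

    def build_narrative(self) -> None:
        """Render the criteria as a narrative an investigator can read."""
        reasons = "; ".join(self.triggered_criteria)
        self.narrative = (
            f"Transaction {self.transaction_id} was flagged with "
            f"{self.risk_probability:.0%} estimated risk because: {reasons}."
        )

# Hypothetical example: the model's top signals, verbalised as criteria.
alert = ExplainableAlert(
    transaction_id="TX-48213",
    risk_probability=0.87,
    triggered_criteria=[
        "amount is 12x the account's 90-day average",
        "counterparty operates in a high-risk jurisdiction",
        "pattern resembles known structuring behaviour",
    ],
)
alert.build_narrative()
print(alert.narrative)
```

Pairing a probability with verbalised criteria in this way gives an investigator both the model's confidence and the reasons behind it in a single, auditable record.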

The second pillar of Whitebox AI is Model Governance and Model Validation. Model governance encompasses elements such as traceability and versioning, tooling to support automation, and customer acceptance. Model validation, meanwhile, involves FIs and AI developers jointly setting model benchmarks. Berner stated, “You need to look at KPIs that are relevant for the customer. You establish the KPIs and then you monitor it over time.”
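
As a rough sketch of what monitoring agreed KPIs over time could look like, the Python below compares observed metrics for a versioned model against customer-agreed benchmarks. The KPI names, thresholds, figures, and model version are illustrative assumptions, not values from the presentation.

```python
# KPI names, thresholds, and figures below are illustrative assumptions.
BENCHMARKS = {
    "false_positive_rate": 0.30,  # max share of alerts later cleared
    "detection_rate": 0.90,       # min share of confirmed cases caught
}

def check_kpis(observed: dict[str, float], model_version: str) -> list[str]:
    """Compare a model version's observed KPIs against agreed benchmarks."""
    breaches = []
    if observed["false_positive_rate"] > BENCHMARKS["false_positive_rate"]:
        breaches.append(
            f"{model_version}: false positive rate "
            f"{observed['false_positive_rate']:.0%} exceeds benchmark"
        )
    if observed["detection_rate"] < BENCHMARKS["detection_rate"]:
        breaches.append(
            f"{model_version}: detection rate "
            f"{observed['detection_rate']:.0%} below benchmark"
        )
    return breaches

# Monitor the same KPIs over successive monthly runs of a versioned model.
monthly_runs = [
    ("2023-05", {"false_positive_rate": 0.22, "detection_rate": 0.94}),
    ("2023-06", {"false_positive_rate": 0.35, "detection_rate": 0.91}),
]
for month, kpis in monthly_runs:
    for breach in check_kpis(kpis, model_version=f"aml-model-v2.1/{month}"):
        print("ALERT:", breach)  # escalate to model-governance review
```

Tying each KPI check to a model version is what makes the traceability and versioning element of governance enforceable: any breach can be traced back to the exact model that produced it.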

As the Wolfsberg Principles and the Whitebox AI pillars of Explainability and Model Governance & Model Validation demonstrate, it is feasible to build AI models that are transparent, interpretable, and readily audited by human operators. Berner closed his presentation with the assertion: “Whitebox AI isn’t just possible, it’s here.”

Copyright © 2023 FinTech Global
