Treasury report unveils AI’s potential risks and rewards for financial sector cybersecurity


The US Treasury has released a report examining the challenges and opportunities that AI presents for cybersecurity and fraud prevention in the financial sector.

According to Cyberscoop, this document is a response to President Joe Biden’s AI executive order and draws upon insights from interviews with 42 entities across the financial services and technology sectors. While the report refrains from imposing new cyber-related mandates or explicitly recommending the adoption or avoidance of AI in financial services, it casts a spotlight on AI’s potential to both exacerbate and combat fraud.

Under Secretary for Domestic Finance Nellie Liang emphasised the transformative impact of AI on cybersecurity and fraud within the financial sector. “Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden administration is committed to working with financial institutions to utilise emerging technologies while safeguarding against threats to operational resiliency and financial stability,” Liang stated. Her remarks underscore the Treasury’s expectation that financial institutions navigate AI’s opportunities and threats with care.

The report identifies the acceleration of cyber-enabled fraud as a significant concern, driven by the evolving accessibility of AI tools which may initially give cybercriminals an upper hand. To counteract this, it encourages financial institutions to bolster their risk management and cybersecurity frameworks, integrate AI solutions more comprehensively into their security practices, and promote collaboration and threat information sharing.

Drawing parallels with traditional IT system protection strategies, the report advocates for financial institutions to adopt best practices that reflect the advanced capabilities of AI systems. It notes the alignment of some financial institutions’ practices with the National Institute of Standards and Technology’s AI Risk Management Framework, albeit acknowledging the challenges in establishing practical, enterprise-wide policies for emergent technologies like generative AI.

The document also sheds light on the disparity in capabilities between larger firms, which are actively developing in-house AI systems and risk management frameworks, and smaller firms, which often lack the IT resources to do so. This “widening capability gap” could potentially hinder the smaller entities’ ability to effectively utilise AI in fraud prevention and cybersecurity.

Furthermore, the report elaborates on AI’s role in enhancing existing anti-fraud and cybersecurity measures, allowing for more sophisticated analysis and proactive security postures. It calls for the establishment of a common lexicon around AI tools to facilitate clearer communication between financial institutions, third parties, and regulators, as well as the development of best practices for data supply chain mapping and data standards.

In conclusion, the Treasury Department’s report not only warns of AI’s potential to amplify cyber threats but also highlights its critical role in advancing cybersecurity and fraud prevention within the financial sector. It signals a commitment to collaboration with industry stakeholders to navigate these challenges effectively.
