UK cyber authorities raise alarm over AI chatbots in business operations

cyber

The UK’s National Cyber Security Centre (NCSC) has issued a warning about the security implications of using AI-driven chatbots in business settings.

According to the agency, these tools, including popular ones like OpenAI’s ChatGPT and Google’s Bard, can easily be manipulated to perform harmful activities.

NCSC’s warning comes on the heels of OpenAI and Google’s recent launch of a series of new AI tools designed for large enterprise solutions. Part of the risk arises from the technology being relatively new and not yet fully understood. The NCSC noted that our understanding of large language models (LLMs), which underpin these chatbots, is still “in beta,” meaning that the tech community hasn’t yet comprehensively grasped their capabilities, weaknesses, or vulnerabilities.

Security researchers have found that AI chatbots can be subverted by rogue commands and are susceptible to methods such as SQL injection and prompt injection attacks. These manipulations could, for example, trick a bank’s AI chatbot into conducting unauthorised transactions. Adding to this, inherent design limitations mean that chatbots cannot easily distinguish between instructions and data, making them potentially liable to manipulation.

“Instead of jumping into bed with the latest AI trends, senior executives should think again,” said Oseloka Obiora, CTO at RiverSafe. “Assess the benefits and risks as well as implement the necessary cyber protection to ensure the organisation is safe from harm.” RiverSafe’s CTO further cautioned that the rush to integrate AI tools could lead to “disastrous consequences” if companies fail to implement rigorous security measures.

The NCSC recommends that businesses keen on incorporating these AI tools should proceed with the same caution they’d exercise when using any other software in beta. They also suggest extensive testing of the chatbots through techniques such as social engineering to identify vulnerabilities, and then making risk assessments based on these findings.

The alert from NCSC coincides with OpenAI’s release of its new ChatGPT-4-driven enterprise business platform. Google is also not far behind, announcing the launch of 20 new AI tools targeted at enterprises, with more planned for small and medium-sized businesses.

A recent Reuters/Ipsos poll highlighted that corporate employees are already using AI tools like ChatGPT for mundane tasks such as drafting emails and conducting preliminary research. However, this frequent use, with an average of 36 inputs per day, also potentially puts sensitive corporate data at risk.

Copyright © 2023 RegTech Analyst

Enjoyed the story? 

Subscribe to our weekly RegTech newsletter and get the latest industry news & research

Copyright © 2018 RegTech Analyst

Investors

The following investor(s) were tagged in this article.