Exploring the Impact of Generative AI on Compliance

For the past five months, it has been difficult to avoid talk of generative AI. The technology has captured the public imagination: people are either optimistic about how it will transform business productivity or worried it will put them out of a job. The question is, how will it transform compliance?

Compliance processes require 100% accuracy, with a single mistake potentially costing a company thousands of dollars in fines. Under such pressure, will firms trust generative AI to handle related workflows? Cognitive View founder and CEO Dilip Mohapatra likened generative AI to the revolutions brought about by the PC and, later, the internet.

Mohapatra said, “Since this is such a big shift, it’s very important for firms to really understand the technology, because the change is happening at such a rapid pace that you don’t want to miss the wave. But at the same time, you want to learn, understand it and see where the opportunities are, and where the risks are.”

Firms should be asking themselves a range of questions about AI: what kind of transformation could it bring for end customers, do customers expect them to leverage the technology, what impact could it have on business dynamics, what is the strategy, and how can it streamline processes? There is no time to wait and see what others do; firms need to be proactive. Mohapatra said, “This is a once-in-a-lifetime type of change that we are seeing. And it’s going to continue at a much faster pace from here on.”

Will it replace humans?

As much as people talk about the potential of the technology, that excitement is matched by doom and gloom about what it means for jobs. With generative AI capable of instantly producing entire articles, artworks and code, it is natural for people to fear their roles are about to be replaced. However, this is not necessarily the case. Mohapatra explained that while generative AI will automate many tasks and some jobs will be lost, the technology will also create a whole host of new roles and opportunities.

“We will always embed humans in the loop in our technology,” he said. Companies cannot afford to make mistakes when it comes to compliance, so they will always need a human to fact-check. “When the regulator comes and knocks on the door, we need to be able to explain why we have made that decision. Or if we have used the technology to automate that process, we must show we fully understood the AI’s decision-making process.”

A potential future for the technology is augmented AI. Instead of a worker having to draft emails, build basic outlines of specific incidents or breaches, or listen to phone calls for compliance or management purposes, they could leverage the AI to automate those tasks. The technology can take on the mundane work and help an employee get the basics finished quickly before they step in to do the more advanced work.
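To make the idea concrete, the sketch below shows what such an augmentation step might look like in practice: the AI drafts a first-pass incident outline, and a person reviews, edits and signs it off. The Incident structure, the prompt wording and the call_llm helper are illustrative assumptions, not a description of any vendor’s product.

```python
# Illustrative sketch: an LLM drafts a first-pass incident outline which a
# compliance officer then reviews, edits and signs off. `call_llm` is a
# hypothetical wrapper around whichever approved model the firm uses.

from dataclasses import dataclass

@dataclass
class Incident:
    incident_id: str
    channel: str       # e.g. "email", "phone call transcript"
    raw_notes: str     # unstructured notes or transcript text

def call_llm(prompt: str) -> str:
    """Placeholder: connect this to the firm's approved LLM endpoint."""
    raise NotImplementedError

def draft_incident_outline(incident: Incident) -> str:
    """Return a draft outline for a human to review; never filed automatically."""
    prompt = (
        "Draft a short incident outline for a compliance officer to review.\n"
        f"Source channel: {incident.channel}\n"
        f"Notes:\n{incident.raw_notes}\n\n"
        "Cover what happened, who was affected and suggested next steps, "
        "and flag anything uncertain for human attention."
    )
    return call_llm(prompt)
```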

The next area where Mohapatra sees the technology improving workplaces is compliance risk. He explained that humans are simply not capable of reviewing every specific issue; there are not enough hours in the day. Instead, an AI solution could assess everything and highlight the items a human needs to investigate further. Not only does this save time for compliance teams, it also improves their risk posture. In the audit space, it would allow teams to run a continuous auditing system rather than relying on random batch sampling.
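A minimal sketch of that kind of triage is shown below, assuming a simple keyword heuristic as a stand-in for a real risk model: every interaction is scored, and only those above a threshold are routed to a human reviewer instead of relying on random sampling.

```python
# Illustrative sketch: score every interaction for compliance risk and route
# only the high-risk ones to a human reviewer, instead of random batch sampling.
# `risk_score` is a toy heuristic standing in for whatever model a firm uses.

from dataclasses import dataclass

@dataclass
class Interaction:
    interaction_id: str
    text: str

RISK_KEYWORDS = {"guarantee", "refund denied", "complaint", "vulnerable"}

def risk_score(interaction: Interaction) -> float:
    """Toy heuristic: fraction of risk keywords present in the text."""
    hits = sum(1 for kw in RISK_KEYWORDS if kw in interaction.text.lower())
    return min(1.0, hits / len(RISK_KEYWORDS))

def triage(interactions: list[Interaction], threshold: float = 0.25) -> list[Interaction]:
    """Return the subset a compliance officer should investigate further."""
    return [i for i in interactions if risk_score(i) >= threshold]

# Every interaction is assessed, but humans only review the flagged subset.
flagged = triage([
    Interaction("1", "Customer was told the product is a guarantee of returns."),
    Interaction("2", "Routine balance enquiry, resolved on first call."),
])
print([i.interaction_id for i in flagged])   # -> ['1']
```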

But what are the risks?

It is easy to get lost in the potential of the technology and forget that it still brings risk. This is a new and emerging technology, and regulators are starting to explore greater regulation. Mohapatra urged firms to experiment with the technology, as this is the only way to learn what it can do and where its limitations lie. However, he stressed the need for guidelines and a governance framework around its use within the business.

Some companies are using the technology to draft emails or even entering sensitive data into tools like ChatGPT. While this can help boost efficiency, it comes with significant risks; that sensitive data could, for example, be leaked. “It’s important to implement some governance and guidelines so staff know what to do and what not to do, but still give them an opportunity to learn and experiment.”
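One simple guardrail of the kind Mohapatra describes is a pre-send check that stops obviously sensitive data leaving the firm’s systems. The sketch below is only illustrative: the regular-expression patterns are examples, and a real deployment would use proper PII detection tooling.

```python
# Illustrative sketch: a simple pre-send check that blocks obviously sensitive
# data from reaching an external tool such as ChatGPT. The patterns below are
# examples only; production systems would use dedicated PII detection.

import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b\d{13,16}\b"),
}

def check_before_sending(text: str) -> list[str]:
    """Return the reasons this text should not leave the firm's systems."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

draft = "Please summarise: customer jane.doe@example.com disputes a charge on 4111111111111111."
issues = check_before_sending(draft)
if issues:
    print(f"Blocked: remove {', '.join(issues)} before using an external tool.")
```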

In addition to the guidelines, Mohapatra said firms should look to leverage large language models (LLMs), something Cognitive View has done itself. These models help protect data, privacy and sovereignty. On top of this, when the models are learning, they explain what they are doing, allowing greater clarity and stronger guardrails to protect sensitive information.

On a final note, Mohapatra said firms should educate their employees to understand that generative AI can be wrong. The technology is not infallible, so if it is being used for critical tasks, everything needs to be fact-checked to ensure the right decision is being made.

Inbound regulations

While companies are eagerly embracing AI and generative AI tools, regulators are keeping an eye on the market and many are looking to increase legislation around the technology. In the UK, for example, the government recently released a whitepaper outlining five principles for the safe use of AI in business: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Around the same time, the FCA, PRA and Bank of England released their own consultation on the use of AI, including their own set of principles. Regulatory bodies around the world are taking similar measures, such as the EU’s proposed AI Act.

Generative AI could soon be the focus of regulation. Over 1,400 tech leaders recently expressed concern about the pace of AI development, signing an open letter calling for a pause on the training of advanced AI models over fears of the technology’s impact on society. Signatories included Elon Musk, Steve Wozniak and Evan Sharp.

Mohapatra believes regulations around AI are essential: while the technology brings a lot of new and exciting opportunities, it also brings a lot of risk. However, he was not convinced that halting development is necessary for lower-risk use cases, such as automating mundane tasks or helping employees with routine work that poses no security threat.

“I’m not in favour of those who are saying we need to pause the development for six months. I don’t think that’s the right strategy, because the development and pace of innovation will continue, regardless of whether you want to stop it or not. The train has already left the station, so it is impractical to do that. However, putting in more guardrails and identifying the higher risk areas that need more stringent controls is important.”

One of the controls Mohapatra is keen on is accountability. He stated that there needs to be a standard ensuring AI does not carry bias around gender or race. “We don’t want technology to dominate and create segregation between society.” To uphold this, there needs to be accountability for the AI, meaning both the developer of the AI and the user should be held responsible for how the technology is used.

Regulations around AI are a case of when, not if. While some companies might want to wait for more guidelines, Mohapatra believes firms should not waste time. “There are already some good guidelines about how to start using this technology. Regulators are always a bit slow to catch up and a lot of the time regulators are reactive rather than proactive.” This slow response to emerging technology can be seen with cryptocurrency: while digital tokens have been widely used for many years, regulators around the world are still trying to get their heads around the technology and decide on the best legislation.

Cognitive View’s generative AI solution

Cognitive View is not a company to wait on the sidelines when it comes to innovation. Last month it launched a tool that uses generative AI to transform how companies deal with customer complaints. Cognitive View trained the AI on large datasets of customer complaints, enabling it to understand the common themes and issues that customers face.

The tool creates a summary of a customer’s complaint by analysing the grievances in accordance with the company’s guidelines and regulatory requirements. This results in a comprehensive understanding of the nature of the complaint, insights on similar complaints within the industry, and an assessment of potential compliance issues.

Beyond the immediate resolution of complaints, the solution can identify trends and patterns in customer complaints. By proactively pinpointing compliance gaps, the tool gives businesses the opportunity to address issues head-on. The insights it provides can also drive product improvements, changes to the customer service process, or even updates to company policies.
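As an illustration of the trend-spotting idea, the sketch below counts complaint categories by month so that recurring issues stand out; the categories and dates are invented for the example.

```python
# Illustrative sketch: surface the most common complaint categories per month
# so recurring compliance gaps can be spotted early. The data is invented.

from collections import Counter
from datetime import date

complaints = [
    (date(2023, 5, 2), "fees not disclosed"),
    (date(2023, 5, 9), "fees not disclosed"),
    (date(2023, 5, 17), "delayed refund"),
]

def monthly_trends(records):
    """Count complaint categories for each month."""
    by_month = {}
    for when, category in records:
        by_month.setdefault(when.strftime("%Y-%m"), Counter())[category] += 1
    return by_month

for month, counts in monthly_trends(complaints).items():
    print(month, counts.most_common(2))   # e.g. 2023-05 [('fees not disclosed', 2), ...]
```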

The idea for the solution came after the Cognitive View team assessed the typical day of a compliance professional. Their goal was to understand where most of that time is spent and how technology could ease the burden. One of the biggest areas was customer complaints.

One of the biggest issues companies face is that their data is scattered, forcing workers to search across emails and various other communication channels to understand the full nature and context of a complaint. The generative AI tool can instead cross-reference all of these datasets to build an accurate picture of the complaint, summarise it for the employee and then offer potential next steps. On top of this, the tool can surface trends, such as the most common complaints in a given month.
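The sketch below illustrates the cross-referencing step in the simplest possible terms: messages about one complaint are gathered from several channels and assembled into a single prompt alongside the firm’s guidelines, ready to be passed to whichever model the firm has approved. It is not Cognitive View’s implementation; the Message structure and prompt wording are assumptions made for the example.

```python
# Illustrative sketch: gather every message about one complaint from different
# channels and build a single prompt that pairs the thread with the firm's
# complaint-handling guidelines, ready to send to an approved LLM.

from dataclasses import dataclass

@dataclass
class Message:
    channel: str      # e.g. "email", "chat", "call transcript"
    timestamp: str    # ISO-8601 string so sorting works lexicographically
    text: str

def build_summary_prompt(complaint_id: str, messages: list[Message], guidelines: str) -> str:
    """Assemble one prompt covering the whole complaint, ordered by time."""
    thread = "\n".join(
        f"[{m.timestamp}] ({m.channel}) {m.text}"
        for m in sorted(messages, key=lambda m: m.timestamp)
    )
    return (
        f"Company complaint-handling guidelines:\n{guidelines}\n\n"
        f"All messages for complaint {complaint_id}:\n{thread}\n\n"
        "Summarise the complaint, note any potential compliance issues, "
        "and suggest next steps for the handler to review."
    )

prompt = build_summary_prompt(
    "C-1042",
    [
        Message("email", "2023-05-01T09:12:00", "I was charged a fee I was never told about."),
        Message("chat", "2023-05-02T14:30:00", "Still waiting on a response about the fee."),
    ],
    "Acknowledge within 24 hours; check fee disclosures; escalate if unresolved after 5 days.",
)
print(prompt)
```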

A boon of the generative AI tool is that it can understand context. Every business is different, and each complaint is different, so the technology needs to adapt to the relevant parameters, which is something Cognitive View’s tool can do. The solution is trained to follow a company’s specific guidelines and can trace a complaint across the various communication channels to ensure everything is captured in the summary.

“The way we are looking into generative AI is not only its ability to understand the complex regulatory requirement, but its ability to also understand your business. It understands the product you’re selling, your existing processes, and then it tries to find the match to say ‘okay, so this regulation needs to be applied this way within your operations.’ Our technology has been built to act as a co-pilot. It saves you a lot of time, reduces compliance risk, and drives customer experience.”

This generative AI tool is the first of many. The company is currently focused on three core areas. The first is building an improved user experience, where users can talk to a bot to get a quick, insightful picture of current business operations. The second is an internal audit tool that can run audits automatically so the compliance team can focus on higher-level tasks. The final area is using AI to extract deep insight, whether from complex regulatory comments, internal datasets or general business operations.

As a final thought on the topic of generative AI, Mohapatra said, “This space is evolving, and everyone is trying to get their head around it. It is key to experiment, because that learning is going to make a huge difference for companies.”
