Regulators crave more transparency, technology must cater to the demand

Regulators are asking for more transparency in decision making, and technology must offer a glass-door view into its decisions to meet these needs, according to a panel at the Global RegTech Summit.

The panel consisted of AutoRek head of the banking sector Hugh Burden, Compliance.ai co-founder and CEO Kayvan Alikhani, HSBC head of data change (risk and finance) Randeep Singh Buttar, and GRC Solutions director Owe Lie-Bjelland. The group discussed the impact of RegTech on the risk management model.

AutoRek’s Hugh Burden chaired the panel and began the conversation by asking which underlying technologies were becoming more prevalent in the market and held the most promise for the future.

It comes as no surprise that AI and machine learning topped the speakers’ lists of exciting technologies. Over the past few years they have dominated the industry, capable of automating time-consuming manual tasks and connecting data together to generate deeper insights. At this point it is hard to imagine a company that has not looked at integrating a piece of AI-powered software.

While the industry is euphoric about its endless possibilities, not everything is sunshine and rainbows, and there are still challenges that need to be addressed.

Compliance.ai offers regulatory intelligence to help companies within the financial sector receive the latest updates, insights, news, announcements, and intelligence on regulations. Company co-founder Kayvan Alikhani said that machine learning can process a vast amount of data in a matter of seconds, pull out key information and use it to give risk managers renewed insights into the organisation. However, this technology cannot simply be generic.

The reason for this is that a bank like HSBC will not have the same risk profile or risk management needs as a community bank operating in a single US state, or a broker-dealer. They all focus on different topics.

He said, “From that perspective, the new and up and coming technologies are more of a multi-tiered approach to learning where from one level you’re providing some standard answers as far as patterns and insights and extracted classifications, and at a different level it becomes more prescriptive about the company themselves. What’s your posture? What’s your appetite for risk? How do you typically process this? What’s been your history from a risk assessment perspective? This is what we’re seeing time and time again talking to CROs and CCOs.”

Making the technology more adaptable and configurable to a specific company’s risk style is not the only challenge facing RegTech companies. Ever since the financial crisis of 2008, regulators have placed a growing focus on transparency and accountability, wanting to track decisions with more scrutiny to avoid another large-scale crash. Even if a piece of technology can automate tasks with high accuracy, it is still not immune to errors. This means the AI needs to be transparent about the decisions it comes to.

He added, “If you’ve helped HSBC or any other bank to make an assessment, from the auditor, examiner, regulator and the larger bank’s perspective, more and more they look for transparency in that decision making. How did you come up with this automated decision? What was the model? What was the percentage of time that humans were used in helping with making that assessment and that kind of a glass door approach is becoming more and more of a must have from a technological perspective.”

HSBC’s Randeep Singh Buttar agreed wholeheartedly that regulators are asking for more transparency. It does not stop there, though. He feels regulators are looking for a ‘lot more grain’ in terms of the data they are asking for. This means they want to know how a company comes to its decisions and the lineage of the data itself, among many other aspects.

Among the technologies he sees coming to fruition are text analytics, natural language processing, and the semantic mapping of data. Pioneering the next phase, according to Buttar, will be robotic process automation (RPA). Within a bank’s risk management framework there are various chief risk officers on a per-country/per-region basis. They need to meet frequently to monitor the bank’s risk appetite, looking at whether they are meeting their targets and the status of the varying risk types, such as market, credit and operational risk. Typically, this leads to teams frantically running around trying to collate and aggregate data to help them make these decisions. RPA can step in and handle the data processing, removing this human cost.

He said, “The reason why I like semantic technology, graph stores, triple stores and the rest of it is purely because, a bank like HSBC, our challenge isn’t that we don’t have structured data, we’ve got tons of it. Our challenge is how do we stitch it all together. When you consider the fact that using a graph store and RDF format, you can actually get data from any source, structured or unstructured, build a relationship across various datasets and you end up in a situation where, gone are the days of sort of primary foreign key links between relational databases. So, everything becomes flattened and I guess those of you who are familiar with the Panama Papers, I mean that’s exactly how they were able to draw their inferences, was using similar graph technology. So, I’m quite keen on that just to unlock the potential within our data in a whole host of places.”
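The idea Buttar describes can be sketched in a few lines: store every fact as a (subject, predicate, object) triple, and datasets link on shared entities rather than primary/foreign keys. The sketch below is a minimal, hypothetical illustration in plain Python, not HSBC's system or a real RDF library; all identifiers and data are invented.

```python
# A tiny in-memory triple store: facts from any source, structured or
# unstructured, are flattened into (subject, predicate, object) triples.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# Facts drawn from two hypothetical, separately maintained datasets.
add("acct:123", "ownedBy", "person:alice")       # structured core-banking table
add("person:alice", "directorOf", "co:shellco")  # extracted from unstructured filings
add("co:shellco", "registeredIn", "jurisdiction:panama")

def related(start, max_hops=3):
    """Walk outgoing edges from `start`, collecting every reachable node --
    the kind of traversal used to surface hidden links across datasets."""
    seen, frontier = set(), {start}
    for _ in range(max_hops):
        frontier = {o for s, _, o in triples if s in frontier and o not in seen}
        seen |= frontier
    return seen

# Starting from the account, the traversal surfaces the offshore link even
# though no single source table contains the full chain.
print(sorted(related("acct:123")))
# → ['co:shellco', 'jurisdiction:panama', 'person:alice']
```

Because every fact shares one flat shape, a new dataset joins the graph simply by emitting triples that reuse existing entity identifiers, with no schema migration or foreign-key plumbing.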

As stated before, technology is not always correct. Mistakes can be made, whether caused by an error in the code or by incorrect data being monitored; technology is not infallible. But how can you make technology accountable? A bunch of metal and cables cannot be reprimanded for causing issues, it cannot be fired, and it cannot even be guaranteed not to make the same mistakes again.

Buttar said, “I think AI should be regulated, AI machine learning, but the typical response to that is no way, how are you going to do that? And, I think the conversation just needs to begin. And I think it requires involvement between governments, startups, those in academia because you’re absolutely right. Things like unconscious bias are very dangerous. And we deal with people, we have a responsibility to society. We need to make sure that we’re not doing any harm.”

Regulations like the Senior Managers & Certification Regime (SM&CR) are putting greater emphasis on accountability and people taking personal responsibility for their actions. If it is one rule for humans making decisions, it cannot be another for technology. This is an increasingly popular question, with another panel at the RegTech Summit bringing up the same calls for the liability of technology. As ever, opinions are split on whether it is something that should be done.

Alikhani added, “You cannot hold machines responsible. Ultimately the accountability is with the companies. You can’t point to the machine and say the machine did it. So, we are seeing more and more of a need for human involvement, what we call an expert in the loop. An ongoing involvement to be able to show a healthy percentage of decisions that were reviewed by humans. It doesn’t have to be 100%, but it has to be a percentage of decisions and as long as you show that on an ongoing basis, I think regulators are saying that they’re going to be happy, we’ll see. But I agree with the regulatory requirement for AI as well.”
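The "expert in the loop" bookkeeping Alikhani describes amounts to logging every automated decision, flagging which ones a human reviewed, and reporting the review rate on an ongoing basis. A minimal hypothetical sketch, assuming a simple in-memory log and an illustrative every-third-decision sampling policy (in practice sampling would be risk-based):

```python
from dataclasses import dataclass

@dataclass
class DecisionLog:
    """Tracks automated decisions and the share reviewed by a human."""
    total: int = 0
    reviewed: int = 0

    def record(self, decision_id: str, human_reviewed: bool) -> None:
        # decision_id would key into a full audit trail in a real system.
        self.total += 1
        if human_reviewed:
            self.reviewed += 1

    def review_rate(self) -> float:
        return self.reviewed / self.total if self.total else 0.0

log = DecisionLog()
for i in range(10):
    # Illustrative policy: route every third decision to a human reviewer.
    log.record(f"decision-{i}", human_reviewed=(i % 3 == 0))

print(f"{log.review_rate():.0%} of decisions human-reviewed")  # → 40%
```

The point is not the data structure but the auditability: a regulator asking "what percentage of automated decisions did humans check?" gets a concrete, continuously maintained number rather than an assurance.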

Copyright © 2019 FinTech Global

