Many RegTech solutions have artificial intelligence at their core, but the founder of Ascent has warned that developers need to be aware of the ethical pitfalls using the technology can entail.
The late Sir Terry Pratchett once wrote that real stupidity beats artificial intelligence every time. While he may have offered the nugget as a joke, it does essentially encapsulate an inherent problem with AI – when it is used unethically, it is usually because the people behind it have done something wrong. Essentially, unethical AI is a human problem.
That was one of the core messages delivered by Brian Clark, founder and CEO of RegTech startup Ascent, in a recent talk about why ethical AI matters. The talk was part of an ongoing series launched by the startup called Ascent Virtual Open House.
The initiative was launched to offer people a virtual space where they can learn new things and stay connected in these days of social distancing brought on by the coronavirus. As part of the push, Ascent will host a new talk about regulations or technology every week through July to give people free insights during these trying times.
Clark kicked off the initiative by talking about the role ethics play in the rollout of AI-powered solutions. “There’s a lot of work being done both publicly and privately in artificial intelligence,” he said.
The abundance of such initiatives has certainly helped businesses in finance better predict and analyse trends and made it easier and faster for pharmaceutical ventures to create medicines.
In the RegTech industry, AI is used for everything from ensuring legal compliance to strengthening cybersecurity capabilities. Ascent is one of the companies using AI to automate compliance for its clients, which has earned the startup a spot on the coveted RegTech100 list of the 100 most innovative RegTech solution providers in the world.
“There are a lot [of] steps to analysing laws and regulations, understanding changes [and] reviewing data that are very time intensive and [have] a lot of human error,” Clark explained. “We don’t use AI because it’s sexy or because it is good for our marketing campaigns. We use AI because it is a tool that solves our problem [and] it solves it better than humans.”
However, some have been quick to equate the growth of the market with the rise of Skynet, the evil supercomputer from the Terminator movie franchise, or at least an AI that can independently gather data about the world around it and make erroneous and unethical decisions based on that data.
“You hear it in the news a lot that artificial intelligence is a concept that is pushed out with a bit of an alarmist view,” said Clark. “[Fortunately,] that is much further off into the future. That’s general AI.”
There are thinkers in the field who believe there will come a time when the so-called singularity – the term used to describe the rise of a super AI – will irrevocably transform human society as we know it. However, the singularity is only expected to occur sometime between the 2040s and the 2060s, depending on who you ask. So it is still some time away.
What Clark was referring to when he talked about AI was domain-specific AI – tools and algorithms used to solve particular problems based on the parameters set out by the programmer and the data put into the program.
Yet, there are potential ethical problems with this too that people developing these solutions should keep in mind, Clark warned.
For instance, businesses using AI must consider what Clark referred to as the unity problem. This challenge boils down to what is often seen as the big benefit of AI – that individuals have more power. Even though that means one person can do more good more easily, the reverse is sadly also true. “You [have to] think about [how] one bad actor who can influence a million lives,” Clark warned.
Another concern raised is that the rise of AI would destroy the labour market, as many people’s jobs would be rendered obsolete. Clark pointed out that this has been a concern ever since the industrial revolution and every other time new technologies have arrived on the market.
Indeed, the word saboteur is said to derive from the wooden shoes, or sabots, that factory workers in France wore in the 19th century when they protested that early machines were stealing their jobs.
Yet, Clark pointed out that it is not the role of capitalism to protect jobs, but to build progress.
That being said, he was aware that AI could exacerbate inequalities in society, as the people who can create and utilise the technology usually have greater financial resources than many workers, which also means that they are the ones who will reap the rewards of AI.
Another concern Clark wanted people to be aware of was that, while many workers will eventually be able to transition into new roles, retraining them will take longer than launching and implementing new AI solutions.
Clark also acknowledged the ethical issue of bias. “Bias isn’t inherently bad by itself, but it is inherently bad when you’re making decisions without understanding what that bias was,” he said.
As an example, he pointed at Tay, a Twitter chatbot launched by Microsoft in March 2016. The AI-powered program was fed data from its interactions with other users on Twitter. Tay quickly learned to tweet racist and misogynist messages, and Microsoft pulled the plug on it within 24 hours.
Clark explained that the data used created a bias that made Tay express thoughts and opinions using “less than conventionally accepted language” and “trained the algorithm to be very biased [and], in particular, very discriminatory.”
However, he pointed out that this is more due to the datasets these machines have to learn from than to the actual program. “So it is really important to understand your dataset, where you got it from, what all the context around that bias is and how it matters to the decision you are trying to make,” he said.
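Clark’s point about understanding a dataset before training on it can be illustrated with a minimal, hypothetical sketch – this is not Ascent’s tooling, just a toy example with invented data showing how a simple audit of label distributions can surface skew before it ever reaches a model:

```python
from collections import Counter

# Hypothetical toy dataset of (text, label) pairs. The skew is deliberate:
# every example mentioning "group_b" carries a "reject" label.
dataset = [
    ("group_a applies for loan", "approve"),
    ("group_a applies for loan", "approve"),
    ("group_a applies for loan", "reject"),
    ("group_b applies for loan", "reject"),
    ("group_b applies for loan", "reject"),
    ("group_b applies for loan", "reject"),
]

def label_distribution_by_token(data, token):
    """Return the share of each label among examples containing `token`."""
    counts = Counter(label for text, label in data if token in text)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Auditing the dataset before training surfaces the skew:
print(label_distribution_by_token(dataset, "group_a"))
print(label_distribution_by_token(dataset, "group_b"))
```

A model trained naively on such data would learn to associate the group token itself with rejection – the kind of unexamined bias Clark warned about, and a check this simple makes it visible up front.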
Ascent has avoided this particular problem by doing all of its algorithmic development work in-house instead of using open-sourced and outside data, as these can create unforeseen bias.
As noted at the top, it all boils down to the human factor. “Machines are not going to do anything we don’t tell them to do,” he said. “The problem will be if we program them improperly.”
Understanding this distinction and the issues outlined will enable businesses and developers to use AI in an ethical and optimal way.
Copyright © 2018 RegTech Analyst