Artificial intelligence is becoming commonplace, but fears persist about how the technology is used and how misuse can be prevented.
In its paper, DreamQuark aims to identify the fears that need to be addressed, the principles that need to be promoted and the concrete applications for an AI that is trustworthy and ethical. The whitepaper also offers a vision of how companies can be best placed to handle the challenges of ethical AI.
The AI market is expected to be worth an eye-watering $8.3trn in the US by 2035, according to a report by Accenture. Furthermore, it is believed the sector will hit £2.1trn in Japan, $1.1trn in Germany and $814bn in the UK.
There is clearly a lot of interest and opportunity in the sector. However, there are fears associated with the technology. DreamQuark states there is concern that if AI is not controlled or questioned, it could evolve to be erratic and counter-productive. There is also mistrust around the technology and how it works.
These fears need to be addressed, and DreamQuark believes this is where ethics comes in. By embedding a strong value system, companies can dispel much of this uneasiness.
It would also enable businesses to foster value creation and reach their objectives, whilst taking into account those affected by AI.
In its whitepaper, DreamQuark said, “Two options could be envisioned: either we simply forbid the use of AI, which means we deprive ourselves of all the positives it brings; or, we carefully experiment with AI, all while reducing its downsides. This means it is time to think about which applications of AI are ethical and which ones are not. An ethical framework is necessary, and it has to exist in a balance between free experimentation and cautious limitation.”
Copyright © 2018 RegTech Analyst