Operationalising ethical AI: The strategic drive of the DIU

In a strategic move, the Defense Innovation Unit (DIU), an organisation established to foster the incorporation of commercial technology within the Department of Defense (DoD), launched an initiative in March 2020. Its aim: to weave DoD’s Ethical Principles for Artificial Intelligence (AI) into its commercial prototyping and acquisition programmes.

Quantifind, a provider of AI-driven risk intelligence automation for leading organisations, has released a report exploring what DIU's Responsible Artificial Intelligence (RAI) Guidelines mean and how to implement ethical AI.

DIU drew on best practices from across the government, non-profit, academic, and industry sectors. The outcome was its Responsible Artificial Intelligence (RAI) Guidelines.

The RAI Guidelines are essentially a set of specific questions to be addressed during each phase of an AI project's lifecycle: planning, development, and deployment. They offer a roadmap for AI companies, DoD stakeholders, and programme managers to align AI programmes with the DoD's Ethical Principles for AI, helping to ensure that fairness, accountability, and transparency are considered at every stage of development. The RAI Guidelines are currently being implemented across a variety of projects spanning areas such as predictive health, underwater autonomy, predictive maintenance, and supply chain analysis.

The limitations of the RAI Guidelines must also be acknowledged. They do not offer blanket solutions for all challenges, such as biased data, ill-chosen algorithms, or poorly defined applications. In certain situations, if a proposed system for national security does not meet the criteria for responsible deployment, discontinuing the AI capability should be an acceptable result of following the RAI Guidelines. The RAI Guidelines should be seen as an addition to a company’s existing internal ethics review and related testing and evaluation (T&E) procedures.

Throughout the implementation of the RAI Guidelines, DIU has identified key insights for each phase of the AI development lifecycle. These span the planning stage, such as defining tasks and metrics and identifying stakeholders; the development phase, including data security and system auditing; and the deployment phase, which involves continuous validation and harms assessment.

The RAI Guidelines serve as a crucial first step in operationalising the DoD’s Ethical Principles for AI. Moving forward, DIU is committed to continuous collaboration with experts and stakeholders across the government, industry, academia, and civil society sectors, in order to further enhance the RAI Guidelines.

Quantifind's report provides an overview of the RAI Guidelines, which emerged from DIU's efforts to bring the DoD's Ethical Principles for AI into its prototyping initiatives.

It also offers detailed case studies illustrating the value of the RAI Guidelines in practice, identifying specific lessons learned from these efforts.

The report's Appendix also includes a set of instructive materials designed to help personnel across the DoD apply DIU's RAI Guidelines to their technology development or acquisition programmes, or to inform future research in ethical AI.

Read the full report here.

Copyright © 2023 FinTech Global
