The U.S. Department of Defense (DoD) is investing heavily in the development of machine learning (ML) algorithms to assist in many decisionmaking processes. This report provides policymakers and developers of ML algorithms with a framework and tools to produce algorithms for personnel management that are consistent with DoD's equity priorities.
Advancing Equitable Decisionmaking for the Department of Defense Through Fairness in Machine Learning
Research Questions
- What are DoD's equity goals?
- Which ML technologies under development for personnel management interact with these goals?
- How do these equity goals compare with technical definitions of equity found in the literature?
- How can DoD develop algorithms that meet both DoD's equity goals and broader institutional objectives?
The U.S. Department of Defense (DoD) places a high priority on promoting diversity, equity, and inclusion at all levels throughout the organization. Simultaneously, it is actively supporting the development of machine learning (ML) technologies to assist in decisionmaking for personnel management. There has been heightened concern about algorithmic bias in many non-DoD settings, whereby ML-assisted decisions have been found to perpetuate or, in some cases, exacerbate inequities.
This report equips both policymakers and developers of ML algorithms for DoD with the tools and guidance necessary to avoid algorithmic bias when using ML to aid human decisions. The authors first provide an overview of DoD's equity priorities, which typically center on issues of representation and equal opportunity within the personnel system. They then provide a framework to help ML developers build equitable tools. This framework emphasizes that enforcing equity involves inherent trade-offs that must be considered when developing equitable ML algorithms.
To support the process of weighing these trade-offs, the authors provide a software tool, the RAND Algorithmic Equity Tool, that can be applied to common classification ML algorithms used to support binary decisions. The tool allows users to audit the equity properties of their algorithms, modify algorithms to attain equity priorities, and weigh the costs that attaining equity imposes on other, non-equity priorities. The authors demonstrate the tool on a hypothetical ML algorithm used to inform promotion selection decisions, which serves as an instructive case study.
The tool the team developed in the course of completing this research is available on GitHub.
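To illustrate what an equity audit of a binary classifier computes, here is a minimal sketch that reports two common group-wise metrics, selection rate (demographic parity) and true positive rate (equal opportunity). The function name, data, and groups are hypothetical; this is not the RAND Algorithmic Equity Tool's API.

```python
from collections import defaultdict

def equity_audit(y_true, y_pred, group):
    """Per-group selection rate (demographic parity) and true positive
    rate (equal opportunity) for binary labels and predictions."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "pos": 0, "tp": 0})
    for yt, yp, g in zip(y_true, y_pred, group):
        s = stats[g]
        s["n"] += 1              # group size
        s["selected"] += yp      # predicted positives
        if yt == 1:
            s["pos"] += 1        # actual positives
            s["tp"] += yp        # true positives
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "tpr": s["tp"] / s["pos"] if s["pos"] else None,
        }
        for g, s in stats.items()
    }

# Toy promotion data: 1 = promoted (y_true) or recommended (y_pred)
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(equity_audit(y_true, y_pred, group))
```

Comparing these per-group numbers is the core of an audit: large gaps in selection rate or true positive rate across groups flag a potential equity violation for that definition of equity.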
Key Findings
- With respect to DoD's equity goals, the authors identify three principles that may be linked to mathematical notions of equity: (1) career entry and progression should be free of discrimination with respect to protected attributes, including race, religion, or sex; (2) career placement and progression within DoD should be based on merit; and (3) DoD should represent the demographics of the country it serves.
- Although DoD uses ML technologies in several sectors (e.g., in intelligence and surveillance), the authors find that DoD does not currently rely on ML technologies in personnel management. However, DoD has started investing significantly in researching ML algorithms for personnel management. Developing a framework and tools for considering equity now can help mitigate potential pitfalls in the deployment of ML algorithms for personnel management.
- There are three important lessons from the algorithmic fairness literature that guided the design of the authors' framework and the RAND Algorithmic Equity Tool. First, there are many definitions of equity, each with subtly different implications. Second, it is generally impossible to attain multiple types of equity simultaneously; attaining one form of equity typically necessitates violating another. Finally, constraining an algorithm to behave equitably typically comes at the cost of other performance priorities, such as overall predictive accuracy.
Recommendations
- DoD should audit algorithms that pose an equity risk. Algorithms that are used to aid high-stakes decisions about individuals must be audited to ensure they are meeting the equity goals for their particular application. This includes auditing both the performance properties of algorithms and the data used to train them.
- DoD should increase the specificity of equity priorities. Both auditing and enforcing equity priorities in ML algorithms necessitate translating those priorities into concrete, mathematical concepts. Current DoD equity policies typically lack adequate specificity to perform this translation. The authors recommend that DoD consider moving toward more concrete language in specifying its equity goals. To do so, DoD should consider adopting equity definitions developed by the algorithmic fairness literature.
- DoD should consider using ML as an aid to human personnel management decisions. Although ML algorithms risk introducing algorithmic bias, the authors do not believe that the alternative of human-only decisions is preferable. The ability to both audit and constrain an ML algorithm to meet equity priorities is a considerable advantage over a human-only decision process.
Table of Contents
Chapter One
Introduction
Chapter Two
The Department of Defense's Investment in Equity
Chapter Three
Machine Learning as an Aid to Decisionmaking
Chapter Four
Approaches to Auditing Machine Learning or Constraining It to Be Fair
Chapter Five
A Framework for Developing Equitable Machine Learning Algorithms
Chapter Six
Demonstration of Equity Framework Through a Hypothetical Case Study
Chapter Seven
Conclusions
Appendix A
Overview of Technical Equity Definitions
Appendix B
Technical Description of Post-Processing Methods
Appendix C
Machine Learning in the Department of Defense for Nonpersonnel Issues
Research conducted by
Funding for this research was made possible by the independent research and development provisions of RAND’s contracts for the operation of its U.S. Department of Defense federally funded research and development centers. The research was conducted by RAND Project AIR FORCE.
This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.