An Analysis of Alternative Approaches to Measuring Multinational Interoperability

Early Development of the Army Interoperability Measurement System (AIMS)

by Bryan W. Hallmark, Christopher G. Pernin, Andrea M. Abler, Ryan Haberman, Sale Lilly, Samantha McBirney, Angela O'Mahony, Erik E. Mueller


Research Question

  1. What alternatives exist for the Army's interoperability measurement system?

The National Defense Strategy (NDS) emphasizes the need for U.S. forces to be interoperable with capable allies and partners. To support the NDS, the U.S. Army develops and executes doctrine and guidelines for how its units can achieve interoperability with partners. The Army identified a need to develop an overarching concept for interoperability that includes explicit links between current Army multinational interoperability doctrine and mission command doctrine. Concurrently, it wanted an enduring and standardized way to measure levels of interoperability achieved as a result of major training events. To that end, the Army asked RAND Arroyo Center to conduct an analysis of alternatives (AoA) of interoperability measurement systems.

Researchers looked at eight different approaches, gathering and analyzing data from a review of materials provided by representatives for each approach and information from multiple rounds of interviews with representatives. No single approach addressed all dimensions identified as important for a future system, so a completely new approach was proposed, drawing on strengths and eliminating weaknesses from other approaches analyzed. The Army decided to develop a new system — the Army Interoperability Measurement System (AIMS), which includes a quantitative instrument for measuring interoperability levels, a qualitative component to enable capability gap analysis, an automated approach to connect and analyze the data, and exploitation panels that convene immediately following a training exercise.

The authors document their AoA, present the supporting evidence for their measurement system recommendations, and detail the early development of AIMS.

Key Finding

  • No current option had all the characteristics that would be required by the Army's interoperability system.

Recommendations

  • The new measurement system should draw on strengths and eliminate weaknesses of other approaches, providing a more enduring and integrated interoperability measurement system. This system would fulfill the Army's need for a standardized and repeatable methodology to identify, evaluate, document, and organize interoperability issues with allies and partners; develop solutions; and communicate and execute those solutions with the Army's senior and operational leaders.
  • The new system should be computer- or web-based.
  • The Army should strive to minimize any additional personnel resourcing dedicated solely to measuring interoperability.
  • Measures in the new system should look very similar to those that are already collected during training events.
  • The system should have both a quantitative and a qualitative data component with an embedded analytic capability that automatically calculates interoperability levels by priority focus area, ties levels to the qualitative data, and provides user-defined output to enable capability gap analysis.
  • The system should have a standardized format for quantitative data that allows them to be analyzed over time and across exercises and a flexible format for qualitative data to capture newly emerging challenges.
  • The system should have a component with measures that are as straightforward as possible, directly map to interoperability, and are aligned with doctrine to foster universal understanding.
  • The Army should develop a measurement system, not an assessment system, and work to ensure that users and stakeholders are educated on the differences.

Table of Contents

  • Chapter One

    Introduction

  • Chapter Two

    Considerations

  • Chapter Three

    Analysis of Alternatives

  • Chapter Four

    Recommendations from the Analysis of Alternatives

  • Chapter Five

    Early Stages of Army Interoperability Measurement System Development

  • Chapter Six

    Concluding Remarks

  • Appendix A

    Analysis of Alternatives Questions List

  • Appendix B

    Completed Questions List for All Considered Alternatives

  • Appendix C

    ART Level I and Level II Tasks Included in AIMS Instruments

  • Appendix D

    Interoperability in Army Mission Essential Tasks

  • Appendix E

    Computing Priority Focus Area Interoperability Levels

Research conducted by

The research described in this report was sponsored by the Office of the Deputy Chief of Staff, G-3/5/7, U.S. Army and conducted by the Personnel, Training, and Health Program within the RAND Arroyo Center.

This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.