xTechScalable AI 2

Description:

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Artificial Intelligence/ Machine Learning; Trusted AI and Autonomy; Advanced Computing and Software; Human-Machine Interfaces

 

OBJECTIVE:

Topic 1: Scalable Tools for Automated AI Risk Management and Algorithmic Analysis

 

As the Army deploys Artificial Intelligence (AI) systems, there is an inherent risk that an AI model could fail to perform as expected.  AI algorithms are complex, and many factors can affect their performance, including malware, data poisoning, model evasion, model inversion, and deepfake attacks. These factors could lead an AI model to make incorrect inferences with significant mission impacts.  The Army seeks to develop automated tools to evaluate AI system risk.  Specifically, the Army is looking for new methods to evaluate, quantify, and mitigate risk against an AI Risk Management Framework (RMF) to ensure deployed AI models are trusted and validated. Tools also need to be automated to reduce the cognitive workload required from the warfighter to validate AI model factors against an AI RMF. This need extends across multiple modalities and model types, including imagery, synthetic aperture radar, large language models, and radio frequency data. There are multiple challenges to quantifying AI risk in the DoD domain; this effort is meant to begin addressing some of them – build a baseline characterization of the risk-related performance of pre-trained models, develop preliminary DoD-specific benchmarks for a set of DoD-related tasks/prompts, and document the divergence that occurs with fine-tuning by factors such as model type, data modality, and inference engine. The Army is aware of existing open-source and commercial tools related to cybersecurity and AI risk management. However, an automated tool that adapts commercial experience and open-source methodologies for military use is needed, as testing and evaluation of the resulting tools will be derived from Army use cases.
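As an illustration of quantifying risk against an RMF-style rubric, the following is a minimal, hypothetical sketch (the dimension names come from the attack types listed above; the scoring scheme, weights, and use of a weighted maximum are assumptions, not a prescribed method):

```python
# Hypothetical rubric: each risk dimension named in the topic gets a
# likelihood and impact estimate in [0, 1]; overall risk is the maximum
# weighted score, so one severe dimension cannot be averaged away.
RISK_DIMENSIONS = ["malware", "data_poisoning", "model_evasion",
                   "model_inversion", "deepfake"]

def score_risk(assessments, weights=None):
    """assessments: {dimension: (likelihood, impact)}; returns per-dimension
    scores and an overall figure suitable for RMF-style reporting."""
    weights = weights or {d: 1.0 for d in RISK_DIMENSIONS}
    scores = {d: likelihood * impact * weights[d]
              for d, (likelihood, impact) in assessments.items()}
    return scores, max(scores.values())

# Illustrative (invented) assessments for one model under review.
scores, overall = score_risk({
    "data_poisoning": (0.3, 0.9),   # unlikely, but severe if it occurs
    "model_evasion": (0.6, 0.5),
    "deepfake": (0.1, 0.4),
})
print(overall)  # 0.3 -- driven by the model-evasion dimension
```

In practice the likelihood/impact inputs would come from automated analyses (e.g., adversarial testing, data provenance checks) rather than manual estimates; the aggregation rule is the piece this sketch demonstrates.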

 

The Army will accept proposals on any AI RMF challenge requiring the application of scalable AI techniques. However, the Army will prioritize submissions addressing the following core need areas for award to maximize impact and scalability across Army AI model development and deployment:

 

  • Automated tools that can identify multiple dimensions of AI Risk, classify AI risk, quantify AI risk, and propose mitigation options that reduce overall risk to the Government deployment of AI systems (including open-source data sets or “black box” models).
  • Automated tools that can accept risk-related inputs from multiple data sources (e.g. model design, model outputs, source code, and data infrastructure) and modalities (e.g. imagery, text, and radio frequency).
  • Automated tools with standardized evaluation methods and mitigation strategies to enable full scalability across the Army enterprise.
  • Automated tools that can be used across multiple Army units from the Program Office to end users.

 

Topic 2: Scalable Techniques for Robust Testing and Evaluation (T&E) of AI Operations Pipelines

 

As the Army moves towards leveraging industry advances for the delivery of AI products, solutions, and services, a robust and automated Test & Evaluation (T&E) approach is needed across AI Operations Pipelines.  The ability to assess industry AI products, open-source solutions, and Government-built solutions generated to support AI Operations is critical to keeping pace with innovation.  However, multiple factors make building AI operations pipelines in the DoD domain uniquely challenging.  The DoD must operate with data and systems at varying classification levels and network configurations.  Any resulting products or solutions must also comply with stringent rules for obtaining and maintaining an Authority to Operate.  Key metrics may include speed (e.g., task, workflow, efficiency, model latency), accuracy, model size (e.g., number of parameters, processing need, storage), authority of the source, model sensitivity to prompts, the creativity setting allowed for LLM outputs (e.g., "full factual" to "full imaginative"), effectiveness of Retrieval-Augmented Generation, and other configuration factors that impact performance.
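A minimal harness for two of the key metrics above, accuracy and model latency, might look like the following sketch. It treats the model as a black-box callable; the `predict_fn` interface and the toy model are assumptions for illustration only:

```python
import time
import statistics

def evaluate_model(predict_fn, test_set):
    """Measure per-inference latency and accuracy for a black-box model.

    predict_fn: callable mapping an input to a predicted label (hypothetical
                interface -- a real pipeline would wrap its inference engine).
    test_set:   list of (input, expected_label) pairs.
    """
    latencies, correct = [], 0
    for x, expected in test_set:
        start = time.perf_counter()
        prediction = predict_fn(x)
        latencies.append(time.perf_counter() - start)
        correct += (prediction == expected)
    return {
        "accuracy": correct / len(test_set),
        "mean_latency_s": statistics.mean(latencies),
        "p95_latency_s": sorted(latencies)[int(0.95 * (len(latencies) - 1))],
    }

# Stand-in model: classifies a number by its sign.
toy_model = lambda x: "pos" if x >= 0 else "neg"
report = evaluate_model(toy_model, [(1, "pos"), (-2, "neg"), (3, "neg"), (0, "pos")])
print(report["accuracy"])  # 0.75 (3 of 4 correct)
```

The other metrics the topic names (prompt sensitivity, RAG effectiveness, creativity settings) would each need their own probes, but they plug into the same report-style structure.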

 

The Army will accept proposals on any T&E challenge requiring the application of scalable AI techniques. However, the Army will prioritize submissions addressing the following three need areas for award to maximize impact and scalability across Army AI model development and deployment:

 

  • Data Integrity: It is essential to carefully curate and maintain training datasets to ensure robust and reliable machine learning models in real-world applications. However, over time the operational environment can change significantly, making old training data less representative of the current situation and potentially degrading model performance. Data drift can manifest in various ways, such as a change in distribution, a change in feature relevance, or the presence of new classes or outliers.  To address this, the Army is interested in:
    • Automated tools to identify and evaluate data integrity inside government training data repositories.
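One common way to automate distribution-drift checks of the kind described above is the Population Stability Index (PSI), sketched below for a single numeric feature. The thresholds in the docstring are a widely used rule of thumb, not an Army requirement, and the binning scheme is an illustrative assumption:

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Population Stability Index (PSI) between a baseline (training-time)
    sample and a current (operational) sample of one numeric feature.
    Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift.
    """
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch current values above the baseline max

    def fractions(sample):
        counts = [0] * bins
        for v in sample:
            for i in range(bins):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # below the baseline minimum
        # Smooth zero-count bins so the log term stays defined.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

stable = population_stability_index(list(range(100)), list(range(100)))
shifted = population_stability_index(list(range(100)), list(range(50, 150)))
print(stable < 0.1 < shifted)  # True -- the shifted sample flags as drift
```

A production tool would run such checks per feature across a repository and also cover the other drift modes the topic names (feature relevance, new classes, outliers), which PSI alone does not detect.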

 

  • Data Labeling: Accurate, reliable, and automated data labeling methodologies are critical components of building machine learning models capable of performing in real-world scenarios.  To facilitate this capability, the Army is interested in:
    • Automated tools to assess the quality, consistency, and accuracy of labels applied to training datasets.
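One standard building block for assessing label consistency is an inter-annotator agreement statistic such as Cohen's kappa, sketched below. The two-annotator setup and the example labels are assumptions for illustration:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators labeling the same
    items, corrected for chance. 1.0 = perfect agreement, 0 = chance level."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    # Expected agreement if both annotators labeled independently at random
    # according to their own label frequencies.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    if expected == 1.0:  # degenerate case: both used a single identical label
        return 1.0
    return (observed - expected) / (1 - expected)

a = ["tank", "tank", "truck", "tank", "truck", "truck"]
b = ["tank", "truck", "truck", "tank", "truck", "tank"]
print(round(cohens_kappa(a, b), 2))  # 0.33 -- weak agreement; labels warrant review
```

An automated labeling-quality tool could compute this per dataset or per class and flag low-agreement slices for re-annotation.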

 

  • Model Training: Evaluating model performance is a critical part of the Army’s strategy to deliver trusted AI.  The Army is interested in innovative T&E research related to model training in the following areas:
    • Resource consumption: Compute, storage, and energy resources required for deploying, operating, and maintaining an AI system over its entire lifecycle.
    • Robustness: Tools to assess how well the model performs under various conditions, such as extreme inputs or noisy data.
    • Scalability: Tools to evaluate how well the model performs when dealing with large datasets, multiple input/output features, and various data sources.
    • Privacy and Security: Tools to ensure that the AI system adheres to strict privacy regulations and does not leak sensitive information from training or test data.
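The robustness item above can be probed with a simple noise-sweep harness: re-evaluate the model while injecting progressively stronger input perturbations and record the accuracy curve. The additive-Gaussian perturbation, the `predict_fn` interface, and the toy classifier below are all illustrative assumptions:

```python
import random

def robustness_curve(predict_fn, test_set, noise_levels, trials=20, seed=0):
    """Accuracy of a model on numeric feature vectors as additive Gaussian
    noise grows. A robust model's accuracy should degrade gracefully."""
    rng = random.Random(seed)  # fixed seed so the sweep is reproducible
    curve = {}
    for sigma in noise_levels:
        correct = total = 0
        for _ in range(trials):
            for features, label in test_set:
                noisy = [x + rng.gauss(0, sigma) for x in features]
                correct += (predict_fn(noisy) == label)
                total += 1
        curve[sigma] = correct / total
    return curve

# Stand-in classifier: sign of the feature sum.
clf = lambda v: "pos" if sum(v) >= 0 else "neg"
data = [([2.0, 1.0], "pos"), ([-3.0, -1.0], "neg"), ([4.0, 2.0], "pos")]
curve = robustness_curve(clf, data, noise_levels=[0.0, 0.5, 5.0])
print(curve[0.0])  # 1.0 -- clean inputs are classified perfectly
```

The same harness shape extends to other perturbation families (occlusion for imagery, paraphrasing for text, interference for RF) by swapping the noise model.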

 

Topic 3: Scalable Techniques for Center-of-Mass and Course-of-Action Analytics for Intelligence Preparation of the Battlefield

 

Visualization of enemy equipment and unit entities on a map is critical for efficient military decision making. Unfortunately, sensors often acquire high volumes of data that bury maps in a “sea of red”, making the display of individual entities burdensome and not easily understandable.  The Army technical problem can be broken down into several areas as it relates to Multi-Domain Operations (MDO).  First, collection plan generation is currently performed in silos based on mission objectives, often through spreadsheets and PowerPoint.  Second, these collection plans are not visible or shareable to entities outside of the unit organizations that create them, which leads to inefficiencies and decreased timeliness of critical information.  Lastly, collection plans are mostly generated manually; this requires multiple human-generated steps to develop an optimized collection plan and often has no relationship to other collection plans that may have similar objectives.

 

The purpose of this topic is to demonstrate how novel approaches and techniques can address these challenge areas and to develop AI algorithms and prototypes that simplify data visualization.  The Army is interested in a Center-of-Mass algorithm that can group organizationally related entities together for display purposes. This algorithm must also be easily transitioned into Program Manager Intelligence Systems and Analytics (PM IS&A) products.  This technology is important for intelligence analysis and Course-of-Action validation. The Center-of-Mass algorithm must understand entity relationships, which units and equipment can be grouped together (tanks and BMPs versus tanks and re-supply vehicles), terrain and hydrology limitations (the center of mass cannot be in the middle of a lake), and what constitutes a certain echelon (three to four tanks form an armor platoon, a tank and three BMPs form a motorized rifle platoon, etc.). The Center-of-Mass algorithm will be used to determine echelon, composition type (armor versus artillery), and strength and direction over time.  This can then be compared to a situation template (SITEMP) with time phase lines to perform enemy Course-of-Action (COA) validation.  COA validation can include whether expected avenues of approach and enemy force composition and strength are valid, whether Named Areas of Interest (NAIs) are appropriately placed, and actual versus planned enemy movement rates.
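The core mechanics described above (group related entities, compute a center of mass, classify the echelon from composition) can be sketched in a highly simplified form. This is a toy illustration only: the greedy proximity clustering, the 1.0-unit radius, and the coordinates are assumptions, and the terrain/hydrology constraints the topic requires are not modeled; the echelon rules are the two examples given in the topic text:

```python
import math

def group_entities(entities, radius=1.0):
    """Greedy proximity clustering of (entity_type, x, y) sightings: an
    entity within `radius` of a cluster's running centroid joins it.
    A stand-in for the relationship-aware grouping the topic calls for."""
    clusters = []
    for etype, x, y in entities:
        for c in clusters:
            cx = sum(p[1] for p in c) / len(c)
            cy = sum(p[2] for p in c) / len(c)
            if math.hypot(x - cx, y - cy) <= radius:
                c.append((etype, x, y))
                break
        else:
            clusters.append([(etype, x, y)])
    return clusters

def classify_echelon(cluster):
    """Toy composition rules taken directly from the topic text."""
    counts = {}
    for etype, _, _ in cluster:
        counts[etype] = counts.get(etype, 0) + 1
    if counts.get("tank", 0) in (3, 4) and len(counts) == 1:
        return "armor platoon"
    if counts.get("tank", 0) == 1 and counts.get("BMP", 0) == 3:
        return "motorized rifle platoon"
    return "unknown"

def center_of_mass(cluster):
    """Centroid of a cluster, for display in place of individual icons."""
    xs = [x for _, x, y in cluster]
    ys = [y for _, x, y in cluster]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Invented sightings in arbitrary map units: two spatially separated groups.
sightings = [("tank", 0.0, 0.0), ("tank", 0.4, 0.1), ("tank", 0.2, 0.5),
             ("tank", 10.0, 10.0), ("BMP", 10.3, 10.1),
             ("BMP", 10.1, 10.4), ("BMP", 10.5, 10.2)]
for cluster in group_entities(sightings):
    print(classify_echelon(cluster), center_of_mass(cluster))
```

Tracking these centroids over time would yield the strength-and-direction estimates that feed SITEMP comparison and COA validation.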

 

A prize competition, xTechScalable AI 2, will be used to identify small business concerns that meet the criteria for award. Winners selected from the xTechScalable AI 2 prize competition will be the only firms eligible to submit a proposal under this topic; proposals from all other firms will not be evaluated. See the full xTechScalable AI 2 prize competition RFI here: https://www.xtech.army.mil/competitions/

 

DESCRIPTION: The U.S. Army would like to invite interested entities to participate in the xTechScalable AI 2 competition, a forum for eligible small businesses across the U.S. to engage with the Department of Defense (DOD), earn prize money, participate in an accelerator program and submit a Phase I or Direct to Phase II Army Small Business Innovation Research (SBIR) proposal. The Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)) is partnering with Program Executive Office Intelligence, Electronic Warfare & Sensors (PEO IEW&S) to deliver the xTechScalable AI 2 competition. The Army recognizes that the DOD must enhance engagements with small businesses by (1) understanding the spectrum of world-class technologies being developed commercially that may benefit the DOD in the artificial intelligence space; (2) integrating the sector of non-traditional innovators into the DOD Science and Technology (S&T) ecosystem; and (3) providing expertise and feedback to accelerate, mature, and transition technologies of interest to the DOD.

 

PHASE I: This topic accepts Phase I or Direct to Phase II submissions. The Department of the Army will accept Phase I proposals with a cost of up to $250,000 for up to a 6-month period of performance and Direct to Phase II proposals with a cost of up to $2,000,000 for an 18-month period of performance.

 

During Phase I, companies will complete a feasibility study that demonstrates the firm’s competitive technical advantage relative to other commercial products (if other products exist) and develop concept plans for how the company’s technology can be applied to Army modernization priority areas. Studies should clearly detail and identify a firm’s technology at both the individual component and system levels; provide supporting literature for technical feasibility; highlight existing performance data; showcase the technology’s application opportunities to a broad base of customers outside the defense space; present a market strategy for the commercial space; explain how the technology directly addresses the Army’s modernization area; and include a technology development roadmap to demonstrate scientific and engineering viability. At the end of Phase I, the company will be required to provide a concept demonstration of its technology showing a high probability that continued design and development will result in a mature Phase II product.

 

In order to submit a Direct to Phase II (DP2) proposal, proposers must provide justification documentation substantiating that the scientific and technical merit and feasibility described above have been met and describing the potential military and/or commercial applications. Documentation should include all relevant information, including, but not limited to: technical reports, test data, prototype designs/models, and performance goals/results.

 

PHASE II: Produce prototype solutions that are easy for a Soldier to operate. These products will be provided to select Army units for further evaluation by Soldiers. In addition, companies will provide a technology transition and commercialization plan for DOD and commercial markets.

 

PHASE III DUAL USE APPLICATIONS: Complete the maturation of the company’s technology developed in Phase II to TRL 6/7 and produce prototypes to support further development and commercialization. The Army will evaluate each product in a realistic field environment and provide the solutions to stakeholders for further evaluation. Based on Soldier evaluations in the field, companies will be requested to update the previously delivered prototypes to meet the final design configuration.

 

REFERENCES:

  1. https://www.xtech.army.mil/competitions/

 

KEYWORDS: xTech; xTechScalable AI; Artificial Intelligence; Machine Learning; Adversarial AI; Data Collection
