Date of Award

2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical and Computer Engineering

Committee Chair

Laurie L. Joiner

Committee Member

Maria Pour

Committee Member

Merv Budge

Committee Member

David Pan

Committee Member

Sivaguru Ravindran

Subject(s)

Drone aircraft, Air defenses, Swarming (Military science), Machine learning

Abstract

Counter-air-defense operations in highly contested airspaces pose significant risk to human life and scarce material resources, making it desirable to reduce the exposure of personnel to loss of life and limb. Replacing human-piloted air platforms with a swarm of low-cost, unmanned systems in the contest for air superiority is therefore an area of intense interest. However, no doctrinal or tactical best practices for swarming combat yet exist. This dissertation documents research conducted to develop a systematic framework for discovering counter-air-defense tactics for unmanned aerial vehicles under the control of a cognitive agent, using a reinforcement learning approach. Traditionally, counter-air-defense mission effectiveness is achieved through weapons having some combination of high quantity, low radar cross section, high speed, low altitude, and/or electronic attack. In the absence of any of these force multipliers, cooperative swarming tactics can be leveraged to achieve mission effectiveness. This domain presents a highly complex state-action space compared with the more constrained, rule-based games in which artificial intelligence agents have successfully learned gameplay strategies. The approach taken in this research is to develop highly semantic observation and action functions that interface the cognitive agent's behavior function, which is trained through repeated gameplay, to the gameplay environment. Various observation and action function designs for a cognitive agent are developed and analyzed, and the framework is used both to facilitate the agent's reinforcement learning and to evaluate mission effectiveness. The proposed framework is shown to be capable of producing highly effective cognitive agents that learn swarm-enabled tactical behaviors, maximizing mission effectiveness and leveraging traditional optimizations where non-cognitive agents are unable to do so.
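
The sketch below is a minimal illustration of the interface pattern the abstract describes: semantic observation and action functions sitting between a learning agent and a swarm gameplay environment. All class names, features, and command labels here (VehicleState, SwarmGameState, the range/bearing features, the tactical command set) are hypothetical assumptions for illustration only, not the dissertation's actual interfaces.

```python
# Hypothetical sketch of a semantic observation/action interface between an RL
# agent and a swarm gameplay environment. Names and features are illustrative
# assumptions, not the dissertation's implementation.
from dataclasses import dataclass
from typing import List
import math


@dataclass
class VehicleState:
    x: float
    y: float
    heading: float
    alive: bool


@dataclass
class SwarmGameState:
    friendly: List[VehicleState]   # swarm UAVs
    threats: List[VehicleState]    # air-defense sites


def observe(state: SwarmGameState, agent_idx: int) -> List[float]:
    """Semantic observation: range and bearing to the nearest threat plus the
    surviving fraction of the swarm, rather than raw low-level game state."""
    me = state.friendly[agent_idx]
    nearest = min(state.threats, key=lambda t: math.hypot(t.x - me.x, t.y - me.y))
    rng = math.hypot(nearest.x - me.x, nearest.y - me.y)
    bearing = math.atan2(nearest.y - me.y, nearest.x - me.x) - me.heading
    alive_frac = sum(v.alive for v in state.friendly) / len(state.friendly)
    return [rng, bearing, alive_frac]


def act(command_idx: int, agent_idx: int) -> dict:
    """Semantic action: map a discrete agent choice onto a high-level tactical
    command instead of low-level flight controls."""
    commands = ["ingress_direct", "flank_left", "flank_right", "loiter"]
    return {"agent": agent_idx, "command": commands[command_idx]}
```

Framing the interface at this semantic level is one way to narrow the otherwise highly complex state-action space the abstract refers to, since the agent chooses among a small set of tactical behaviors rather than continuous control inputs.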
