Date of Award


Document Type


Degree Name

Doctor of Philosophy (PhD)


Department

Computer Science

Committee Chair

Mikel D. Petty

Committee Member

Peter J. Slater

Committee Member

Sampson Gholston

Committee Member

Huaming Zhang

Committee Member

Ramazan S. Aygun


Keywords

Machine learning, Data mining, Spatial analysis (Statistics), Temporal databases, Reinforcement learning


Events with detrimental effects caused by adversarial human actors, such as crimes and terrorist acts, require the convergence of three factors: a suitable target, a motivated offender, and the absence of a capable guardian. Positioning guardian agents in the spatio-temporal proximity of anticipated events in order to prevent or respond to them can be a difficult task. The most common approaches to this problem include models of the adversarial actors to which guardian agents respond. Such model-based approaches can be problematic when there is insufficient information to construct a reliable model, when the factors governing the actor's behavior are too numerous to yield a practical model, or when actor models are inaccurate. Conversely, finding an effective model-free approach is challenging because, without an actor model, agents have little more than the time and location of events to react to.

This research introduces novel methods to compensate for the insufficiency of event time and location as the sole information supporting model-free reinforcement learning, and to boost learning from the limited experience agents may gain from the event sequence. To offset the dearth of information in an environment consisting solely of the time and location of a sequence of events, we synthetically augment the environment with digital pheromones and other information augmenters derived from the event sequence to define informative states. These augmenters may reveal regularities in the timing and location of events that, through reinforcement learning, can be detected and exploited to position agents in spatio-temporal proximity of anticipated events. To improve agent learning from limited experience and to reduce the effects of the environment's non-stationarity, we enhance standard reinforcement learning in two ways.
The first, adapted from existing methodologies, allows agents to learn from actions not taken and from the experiences of others, a concept known as fictive learning. The second is a novel learning-boosting method that complements synthetic augmentation and enables learning from partial-state information. The new techniques were applied to two difficult real-world problems, responding to Somali maritime piracy attacks and to business robberies in Denver, Colorado, and analyzed for effectiveness. In both applications, the combination of enhanced reinforcement learning and information augmenters outperformed the historical record or a domain-representative heuristic.
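As a rough illustration of how digital pheromones can turn a bare event stream into an informative state, the sketch below maintains a pheromone field over a spatial grid: each event deposits pheromone at its cell, and the field evaporates and diffuses each time step, so cells near recent events carry higher values that an agent's state can include. The class name, grid representation, and all rate parameters are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

class PheromoneField:
    """Hypothetical digital-pheromone field over a 2-D grid of cells."""

    def __init__(self, rows, cols, evaporation=0.1, diffusion=0.05, deposit=1.0):
        # Rates here are placeholder assumptions, not tuned values.
        self.field = np.zeros((rows, cols))
        self.evaporation = evaporation
        self.diffusion = diffusion
        self.deposit = deposit

    def record_event(self, row, col):
        """Deposit pheromone at the cell where an event occurred."""
        self.field[row, col] += self.deposit

    def step(self):
        """Advance one time step: evaporate, then diffuse to 4-neighbors.

        np.roll wraps at the edges, so this sketch treats the grid as a
        torus; a real implementation would handle boundaries explicitly.
        """
        self.field *= (1.0 - self.evaporation)
        spread = self.diffusion * self.field
        shifted = (np.roll(spread, 1, axis=0) + np.roll(spread, -1, axis=0) +
                   np.roll(spread, 1, axis=1) + np.roll(spread, -1, axis=1))
        self.field += shifted - 4 * spread

    def state_features(self, row, col):
        """Augmented state signal for an agent at (row, col): the local
        pheromone level (a fuller version might add gradient direction)."""
        return self.field[row, col]
```

Because evaporation decays old events and diffusion spreads recent ones, the field value at a cell summarizes the recency and proximity of past events, which is exactly the kind of regularity a reinforcement learner can exploit.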
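Fictive learning, the first enhancement mentioned above, can be sketched on top of tabular Q-learning: besides the standard update for the action actually taken, the agent also updates Q-values for foregone actions, using the reward each would have earned (in the guardian-positioning setting, such counterfactual rewards can be computed after the fact from each foregone position's proximity to the actual event). The function name, tabular representation, and the simplification of bootstrapping every update from the single realized next state are assumptions for illustration, not the author's algorithm.

```python
import numpy as np

def fictive_q_update(Q, state, next_state, rewards, alpha=0.1, gamma=0.9):
    """Fictive-learning variant of the tabular Q-learning update.

    Q          -- array of shape [n_states, n_actions]
    rewards    -- per-action counterfactual rewards for `state`; entry a is
                  the reward action a earned (if taken) or would have
                  earned (if not taken)
    For simplicity, the realized next state's value bootstraps every
    update; this would not hold exactly if actions led to different states.
    """
    best_next = Q[next_state].max()  # greedy bootstrap, as in Q-learning
    for action, r in enumerate(rewards):
        Q[state, action] += alpha * (r + gamma * best_next - Q[state, action])
```

A standard Q-learner would update only the entry for the taken action; the fictive variant spends one experience on every available action, which is how this style of update extracts more learning from a sparse event sequence.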


