Date of Award

2026

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Applied Experimental Psychology

Committee Chair

Kristin Weger

Committee Member

Jodi Price

Committee Member

Lauren Meaux

Committee Member

Bryan Mesmer

Committee Member

Vineetha Menon

Committee Member

Daniel Krenn

Research Advisor

Kristin Weger

Subject(s)

Artificial intelligence, Human-computer interaction, Trust, Explanation, Technology--Psychological aspects, Explainable AI (XAI)

Abstract

AI systems can substantially improve decision-making through efficiency and predictive accuracy, owing to their ability to rapidly analyze large amounts of decision-relevant data. Yet many AI systems remain opaque, limiting users’ ability to evaluate outputs and leading to mistrust or over-reliance. This dissertation investigates how varying levels of AI explainability affect trust, perceived reliability, understanding, and confidence, and identifies perceptual and behavioral indicators of over-reliance in AI-assisted decision-making scenarios. The dissertation is organized as a three-study series. The CLEAR survey study uses a mixed design to examine the impact of explanation depth on trust-related outcomes. Participants engage with AI-assisted decision scenarios and respond to measures of trust, perceived reliability, confidence in the AI’s accuracy, and understanding of the explanation. Scenario context is also evaluated as a potential moderator. Study 2 builds on this by constructing an Over-Reliance Index (ORI), a combined measure of potential over-reliance based on user perceptions linked to reliance behavior in prior literature. Using data from the CLEAR survey, this study identifies dispositional and contextual predictors of over-reliance, employing regression analysis to examine differences in reliance behaviors across explanation complexity levels and cluster analysis to reveal user profiles. The CLEAR-Engage study shifts from self-report to decision behavior by examining responses to AI recommendations within a simulated, screenshot-based decision-making task. Participants complete a hostage-rescue task in which they are presented with varying levels of AI support. Behavioral measures (e.g., agreement with AI recommendations and detection of AI errors), together with self-report measures of trust and understanding, are used to validate the ORI and identify behavioral markers of over-reliance.
Together, these studies aim to advance understanding of explainability’s role in trust calibration, establish behavioral measures of over-reliance, and inform the design of user-centered AI systems that support effective human-AI teaming across a variety of contexts.

Comments

Comprehending levels of explainability and artificial intelligence recommendations (CLEAR) study series
