Date of Award

2023

Document Type

Thesis

Degree Name

Master of Arts (MA)

Department

Psychology

Committee Chair

Lisa Vangsness

Committee Member

Nathan Tenhundfeld

Committee Member

William Mackzie

Subject(s)

Trust, Explanation, Artificial intelligence, Automation--Psychological aspects, Automation--Human factors, Explainable AI (XAI)

Abstract

Automation has been implemented in a range of machinery. Providing supplementary information about system processes (i.e., explainability) could mitigate over-reliance and enhance operator awareness of potential anomalies. Trust plays a critical role in human-automation collaboration, as over-trust can lead to misuse or over-reliance, while under-trust can result in disuse or the failure to engage automation when it could enhance performance. Dynamic learned trust and situational trust fluctuate during an operator's interaction with a system and can be influenced by the rate of system failures and by workload, respectively. Design features like explainability can impact perceived usefulness and help users identify system errors and competencies. This study investigates the impact of explainability on user performance in a quality control task. Participants were randomly assigned either to receive training on system failures or not, with varying quality control inspection quotas simulating different taskloads. The study used a mixed design and measured participants' use of system recommendations and accuracy over time. The results revealed that explainability enhanced accuracy in moderate- to lower-quota environments, but this effect was contingent on participants receiving training. Explainability also increased reliance in lower-quota situations; however, as workload intensified, the time users took to settle on a suitable level of reliance diminished. While using explainability to help users fine-tune reliance strategies and enhance accuracy is advisable, it should not serve as a substitute for training, particularly for individuals in high-workload environments.
