Title: Effects of automation transparency on trust and diagnostic decision making with an automated decision aid during medical emergencies
Authors: Mohamed Syahid Hassan
Keywords: DRNTU::Engineering::Systems engineering
Issue Date: 2015
Source: Mohamed Syahid Hassan. (2015). Effects of automation transparency on trust and diagnostic decision making with an automated decision aid during medical emergencies. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract:
Automated decision aids can improve decision making performance and reduce errors in complex decision making, such as during a medical emergency. However, because such automated systems are imperfectly reliable, users may miscalibrate their trust in them, leading to problems of automation disuse and misuse. Furthermore, when an automation error does occur, trust in and acceptance of the system are reduced and recover slowly. A transparent automation provides the user with insight into the internal processes that produce its outcomes, and this is expected to improve trust calibration. This research investigated whether automation transparency features could increase diagnostic performance by improving trust calibration, the trust drop after failure, and trust recovery. Novice doctors performing a simulated emergency diagnosis task used automated decision aids with different transparency configurations, manipulated by the presence of two transparency features: a list of Key Diagnostic Cues that explains the aid's recommendation, and a Likelihood Rating that displays how likely the aid judges its recommendation to be correct. The study found no significant evidence that the different transparency configurations improved appropriate trust in the aid's recommendations. Instead, Likelihood Ratings reduced diagnostic confidence during appropriate reliance.

The transparency features also intensified the trust drop, and the corresponding confidence decrement, caused by the experience of an error. However, Key Diagnostic Cues decreased confidence during distrust, while Likelihood Ratings improved trust recovery to the point where recovered trust exceeded initial trust. Although transparency features can intensify mistrust immediately after the automation commits an error, automation transparency features are still recommended for their potential to improve overall trust in the system over the long term.

URI: https://hdl.handle.net/10356/62933
DOI: 10.32657/10356/62933
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: MAE Theses
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.