Abstract
Machine learning systems can help humans make decisions by providing decision suggestions (i.e., a label for a datapoint). However, individual datapoints do not always provide clear enough evidence for a confident suggestion. Although methods exist that enable systems to identify such datapoints and abstain from suggesting a label, it remains unclear how users react to this system behavior. This paper presents initial findings from a user study comparing systems that do or do not abstain from labeling ambiguous datapoints. Our results show that label suggestions on ambiguous datapoints carry a high risk of unconsciously influencing users’ decisions, even toward incorrect ones. Furthermore, participants perceived a system that abstains from labeling uncertain datapoints as equally competent and trustworthy as a system that delivers label suggestions for all datapoints. Consequently, as abstaining does not impair a system’s credibility, it can be a useful mechanism for increasing decision quality.
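The abstention behavior discussed above is often realized by declining to suggest a label when the model's confidence is low. The paper does not specify how its system detects ambiguous datapoints; the following is only a minimal sketch of one common approach, confidence-threshold abstention, where the threshold value and the helper name `suggest_or_abstain` are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: abstain whenever the top predicted probability
# falls below a threshold. The threshold is an assumed value, not one
# taken from the paper.
ABSTAIN_THRESHOLD = 0.75


def suggest_or_abstain(model, X):
    """Return a label suggestion per row, or None to abstain."""
    probs = model.predict_proba(X)                    # (n_samples, n_classes)
    top_prob = probs.max(axis=1)                      # confidence of top class
    top_class = model.classes_[probs.argmax(axis=1)]  # predicted labels
    return [
        label if conf >= ABSTAIN_THRESHOLD else None
        for label, conf in zip(top_class, top_prob)
    ]


# Toy usage example
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)
print(suggest_or_abstain(model, rng.normal(size=(5, 2))))
```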
| Original language | English |
|---|---|
| Title of host publication | Companion Publication of the 2023 ACM Designing Interactive Systems Conference |
| Publisher | Association for Computing Machinery |
| Pages | 169-172 |
| ISBN (Electronic) | 9781450398985 |
| Publication status | Published - 10 Jul 2023 |
| Event | ACM Designing Interactive Systems Conference, DIS 2023 - Carnegie Mellon University, Pittsburgh, United States. Duration: 10 Jul 2023 → 14 Jul 2023 |
Conference

| Conference | ACM Designing Interactive Systems Conference, DIS 2023 |
|---|---|
| Abbreviated title | DIS 2023 |
| Country/Territory | United States |
| City | Pittsburgh |
| Period | 10/07/23 → 14/07/23 |