Explanation and trust: what to tell the user in security and AI?

Wolter Pieters

    Research output: Book/Report › Report › Professional


    Abstract

    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this paper, we investigate the relation between explanation and trust in the context of computing science. The analysis draws on a literature study and concept analysis, using elements from system theory as well as actor-network theory. We apply the conceptual framework to both AI and information security, and show the benefit of the framework for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, we discuss the consequences of our analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.
    Original language: Undefined
    Place of Publication: Enschede
    Publisher: Centre for Telematics and Information Technology (CTIT)
    Number of pages: 19
    Publication status: Published - 2010

    Publication series

    Name: CTIT Technical Report Series
    Publisher: Centre for Telematics and Information Technology, University of Twente
    No.: TR-CTIT-10-32
    ISSN (Print): 1381-3625

    Keywords

    • Trust
    • SCS-Cybersecurity
    • Informed consent
    • Actor-Network Theory
    • Systems theory
    • Expert Systems
    • Explanation
    • Information Security
    • Confidence
