Computer Science > Human-Computer Interaction

Title: What Motivates People to Trust 'AI' Systems?

Authors: Nanna Inie
Abstract: Companies, organizations, and governments across the world are eager to employ so-called 'AI' (artificial intelligence) technology in a broad range of products and systems. The promise of this cause célèbre is that the technologies offer increased automation, efficiency, and productivity - meanwhile, critics sound warnings of illusions of objectivity, pollution of our information ecosystems, and reproduction of biases and discriminatory outcomes. This paper explores patterns of motivation in the general population for trusting (or distrusting) 'AI' systems. Based on a survey with more than 450 respondents from more than 30 different countries (and about 3000 open-text answers), this paper presents a qualitative analysis of current opinions and thoughts about 'AI' technology, focusing on reasons for trusting such systems. The different reasons are synthesized into four rationales (lines of reasoning): the Human favoritism rationale, the Black box rationale, the OPSEC rationale, and the 'Wicked world, tame computers' rationale. These rationales provide insights into human motivation for trusting 'AI' which could be relevant for developers and designers of such systems, as well as for scholars developing measures of trust in technological systems.
Subjects: Human-Computer Interaction (cs.HC)
Cite as: arXiv:2403.05957 [cs.HC]
  (or arXiv:2403.05957v1 [cs.HC] for this version)

Submission history

From: Nanna Inie [view email]
[v1] Sat, 9 Mar 2024 16:39:37 GMT (604kb)
