TY - JOUR
T1 - Hybrid collective intelligence in a human–AI society
AU - Peeters, Marieke M.M.
AU - van Diggelen, Jurriaan
AU - van den Bosch, Karel
AU - Bronkhorst, Adelbert
AU - Neerincx, Mark A.
AU - Schraagen, Jan Maarten
AU - Raaijmakers, Stephan
N1 - Funding Information:
This material is based upon work supported by the Dutch Ministry of Defence’s exploratory research program (project Human-AI Teaming).
Publisher Copyright:
© 2020, Springer-Verlag London Ltd., part of Springer Nature.
PY - 2021/3/1
Y1 - 2021/3/1
AB - Within current debates about the future impact of Artificial Intelligence (AI) on human society, roughly three different perspectives can be recognised: (1) the technology-centric perspective, claiming that AI will soon outperform humankind in all areas, and that the primary threat for humankind is superintelligence; (2) the human-centric perspective, claiming that humans will always remain superior to AI when it comes to social and societal aspects, and that the main threat of AI is that humankind’s social nature is overlooked in technological designs; and (3) the collective intelligence-centric perspective, claiming that true intelligence lies in the collective of intelligent agents, both human and artificial, and that the main threat for humankind is that technological designs create problems at the collective, systemic level that are hard to oversee and control. The current paper offers the following contributions: (a) a clear description of each of the three perspectives, along with their history and background; (b) an analysis and interpretation of current applications of AI in human society according to each of the three perspectives, thereby disentangling miscommunication in the debate concerning threats of AI; and (c) a new integrated and comprehensive research design framework that addresses all aspects of the above three perspectives, and includes principles that support developers in reflecting on and anticipating potential effects of AI in society.
KW - Artificial intelligence
KW - Collective intelligence
KW - Human intelligence
KW - Human–AI collaboration
KW - Human–AI society
KW - Hybrid intelligence
UR - http://www.scopus.com/inward/record.url?scp=85087076996&partnerID=8YFLogxK
U2 - 10.1007/s00146-020-01005-y
DO - 10.1007/s00146-020-01005-y
M3 - Article
AN - SCOPUS:85087076996
SN - 0951-5666
VL - 36
SP - 217
EP - 238
JO - AI & Society
JF - AI & Society
IS - 1
ER -