The ‘Implicit Intelligence’ of artificial intelligence. Investigating the potential of large language models in social science research

Ottorino Cappelli, Marco Aliberti, Rodrigo Praino

Research output: Contribution to journal › Article › peer-review


Abstract

Researchers in the ‘hard’ sciences are exploring the transformative potential of Artificial Intelligence (AI) for advancing research in their fields. Their colleagues in the ‘soft’ sciences, however, have so far produced only a limited number of articles on the subject. This paper addresses that gap. Our main hypothesis is that existing Large Language Models (LLMs) can closely align with human expert assessments in specialized social science surveys. To test this, we compare data from a multi-country expert survey with data collected from two powerful LLMs created by OpenAI and Google. The statistical difference between the two sets of data is minimal in most cases, supporting our hypothesis, albeit with certain limitations and within specific parameters. The tested language models demonstrate domain-agnostic algorithmic accuracy, indicating an inherent ability to incorporate human knowledge and independently replicate human judgment across various subfields without specific training. We refer to this property as the ‘implicit intelligence’ of Artificial Intelligence, a highly promising advancement for social science research.
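
The abstract's core test — whether LLM-generated ratings are statistically indistinguishable from expert survey ratings — can be illustrated with a minimal sketch. The paper does not publish its data or name its statistical procedure, so the comparison below (a Welch's t-test over made-up expert and LLM scores) is an illustrative assumption, not the authors' method.

```python
# Illustrative sketch only: compares hypothetical expert survey scores
# against hypothetical LLM-generated scores for the same survey items.
# Welch's t-test is an assumed stand-in for testing whether the
# difference between the two sets of assessments is "minimal".
from scipy import stats

# Hypothetical 1-10 ratings for five survey items (made-up numbers).
expert_scores = [7.2, 6.8, 8.1, 5.9, 7.5]
llm_scores = [7.0, 7.1, 7.9, 6.2, 7.4]

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(expert_scores, llm_scores, equal_var=False)

# A large p-value would be consistent with the paper's claim that expert
# and LLM assessments differ only minimally.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```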

Original language: English
Article number: 2351794
Number of pages: 20
Journal: Political Research Exchange
Volume: 6
Issue number: 1
DOIs
Publication status: Published - 2024

Keywords

  • Artificial intelligence
  • large language models
  • political science research
  • space policy
  • space power
