TY - JOUR
T1 - Now you see me, now you don’t
T2 - An exploration of religious exnomination in DALL-E
AU - Alfano, Mark
AU - Abedin, Ehsan
AU - Reimann, Ritsaart
AU - Ferreira, Marinus
AU - Cheong, Marc
PY - 2024/6
Y1 - 2024/6
N2 - Artificial intelligence (AI) systems are increasingly being used not only to classify and analyze but also to generate images and text. As recent work on the content produced by text and image Generative AIs has shown (e.g., Cheong et al., 2024; Acerbi & Stubbersfield, 2023), there is a risk that harms of representation and bias, already documented in prior AI and natural language processing (NLP) algorithms, may also be present in generative models. These harms relate to protected categories such as gender, race, age, and religion. There are several kinds of harms of representation to consider in this context, including stereotyping, lack of recognition, denigration, under-representation, and many others (Crawford in Soundings 41:45–55, 2009; in Barocas et al., SIGCIS Conference, 2017). Whereas the bulk of researchers’ attention thus far has been given to stereotyping and denigration, in this study we examine ‘exnomination’, as conceived by Roland Barthes (1972), of religious groups. Our case study is DALL-E, a tool that generates images from natural language prompts. Using DALL-E mini, we generate images from generic prompts such as “religious person.” We then examine whether the generated images are recognizably members of a nominated group. Thus, we assess whether the generated images normalize some religions while neglecting others. We hypothesize that Christianity will be recognizably represented more frequently than other religious groups. Our results partially support this hypothesis but introduce further complexities, which we then explore.
AB - Artificial intelligence (AI) systems are increasingly being used not only to classify and analyze but also to generate images and text. As recent work on the content produced by text and image Generative AIs has shown (e.g., Cheong et al., 2024; Acerbi & Stubbersfield, 2023), there is a risk that harms of representation and bias, already documented in prior AI and natural language processing (NLP) algorithms, may also be present in generative models. These harms relate to protected categories such as gender, race, age, and religion. There are several kinds of harms of representation to consider in this context, including stereotyping, lack of recognition, denigration, under-representation, and many others (Crawford in Soundings 41:45–55, 2009; in Barocas et al., SIGCIS Conference, 2017). Whereas the bulk of researchers’ attention thus far has been given to stereotyping and denigration, in this study we examine ‘exnomination’, as conceived by Roland Barthes (1972), of religious groups. Our case study is DALL-E, a tool that generates images from natural language prompts. Using DALL-E mini, we generate images from generic prompts such as “religious person.” We then examine whether the generated images are recognizably members of a nominated group. Thus, we assess whether the generated images normalize some religions while neglecting others. We hypothesize that Christianity will be recognizably represented more frequently than other religious groups. Our results partially support this hypothesis but introduce further complexities, which we then explore.
KW - Artificial intelligence
KW - DALL-E
KW - Exnomination
KW - Natural language generation
KW - Natural language processing
UR - http://www.scopus.com/inward/record.url?scp=85190256581&partnerID=8YFLogxK
UR - http://purl.org/au-research/grants/ARC/DP190101507
U2 - 10.1007/s10676-024-09760-y
DO - 10.1007/s10676-024-09760-y
M3 - Article
AN - SCOPUS:85190256581
SN - 1388-1957
VL - 26
JO - Ethics and Information Technology
JF - Ethics and Information Technology
IS - 2
M1 - 27
ER -