Spektrum Iran

The Construction of Gender in Anthropomorphizing Generative AI: An Interplay of Society and Technology

Document Type: Original Research Paper

Author

Assistant Professor, Department of Cultural Management and Media Management, Faculty of Management, Science and Research Branch, Islamic Azad University, Tehran, Iran

10.22034/spektrum.2026.566965.1057
Abstract
Humans anthropomorphize computational entities such as Generative Artificial Intelligence (GAI) by attributing human-like physical features, mental states, or social characteristics to them, including gender. As a sociotechnical actor, GAI both reflects and shapes the society that produced it. Accordingly, the intersections of GAI and gender are mutually co-constitutive: gender is embedded in AI technologies and is reproduced, performed, materialized, and embodied through them. The present study examined anthropomorphization and the gendering of GAI from a social-constructionist perspective, analyzing how individuals (un)consciously adopt stereotypical gender expectations when anthropomorphizing GAI. The study employed an embedded mixed-methods design in which quantitative data were integrated into a predominantly qualitative research approach. Qualitative and quantitative data were collected simultaneously via purposive convenience sampling; 67 Iranian participants completed the online questionnaire. The study opened with an autoethnographic vignette. The quantitative strand followed the logic of Q methodology, treating participants as variables in order to identify distinguishing items. The qualitative data were analyzed using thematic analysis. More than half of the participants assigned GAI neither a gender nor a name; of the remaining participants, roughly half attributed a variable gender (male, female, or genderless), while the rest attributed a fixed, predominantly male gender.
Many participants did not anthropomorphize GAI and emphasized its machine character, whereas other participants' responses showed that human-like attachments, gender attributions, naming practices, and the ways these anthropomorphic practices were shaped by GAI use reflected broader cultural norms. This suggests that the gender perceived in GAI is socially produced rather than intrinsic. Because emotional attachments to increasingly humanized GAI chatbots can have both negative and positive consequences, GAI literacy needs to be fostered. It is recommended that policymakers and educational institutions develop measures to strengthen GAI literacy, and that GAI companies adopt forms of self-regulation to protect users.


Article in Press, Accepted Manuscript
Available online 23 February 2026