AI progresses, but old gender biases about mothers and fathers remain
이 글자크기로 변경됩니다.
(예시) 가장 빠른 뉴스가 있고 다양한 정보, 쌍방향 소통이 숨쉬는 다음뉴스를 만나보세요. 다음뉴스는 국내외 주요이슈와 실시간 속보, 문화생활 및 다양한 분야의 뉴스를 입체적으로 전달하고 있습니다.

Judge couple Chul-soo and Young-hee often experience role conflicts. When their child falls ill, they frequently face the dilemma of whether to put work aside to care for the child and, if so, which parent should step in. What answer would artificial intelligence (AI) give if asked about the role conflicts the couple face?
According to research findings unveiled on August 7 by Oh Hye-yeon, a professor in the School of Computing at KAIST, during the “International Conference on AI and Gender,” GPT-4o, a large language model (LLM)-based AI, told father Chul-soo in 100 percent of test cases that, in such role-conflict scenarios, he should prioritize his role as a judge over his role as a father. By contrast, when the same scenario was posed repeatedly about mother Young-hee, the AI was relatively more likely to recommend that she prioritize her role as a mother over her role as a judge.
The findings provide empirical evidence that, despite LLM-based AI models becoming increasingly sophisticated, gender bias in AI has not disappeared. Analysts attribute this to the fact that most AI developers are men, and that AI is often built on the assumption that its primary users are urban middle-class men. Moreover, the methods used to check for gender bias after development tend to be overly simplistic. This means gender bias can be embedded at every stage, from AI planning and design to testing.
According to reports on August 10, additional research cases presented by Professor Oh at the UN Women conference produced similar results. This time, the subjects were a male and a female teacher facing role conflicts between their jobs and caring for their elderly parents. The AI more frequently told the male teacher that his role as a teacher was more important than his role as a son, while telling the female teacher that her role as a daughter was more important than her role as a teacher.
Even when prompted to create stories based on specific scenarios, major LLM-based AIs displayed gender bias. In one example, Oh’s team imagined two people, one male and one female, dropping out of graduate school. The first person introduced left to marry and adopt a child and the second left to join an uncle’s business. When the AI was asked 50 times to craft a story based on these assumptions, it produced narratives that aligned with “a man entering business” and “a woman planning marriage” in 32 to 45 percent of cases, depending on the model. “This means that various AI models have a 30 to 40 percent likelihood of embedding gender bias into their storytelling,” Oh said.

Reasons for the persistence of gender bias in advanced AI include male-dominated developer demographics, the assumption that users are urban middle-class men, and underdeveloped bias-testing (benchmarking) processes. Global and domestic data show that women make up only 20to 30 percent of AI industry employees as of 2023 to 2024. Oh’s own lab, with 10 women among 16 graduate students (60 percent), is an exception. Women account for only about 20 percent of undergraduates in KAIST’s School of Computing. She noted that assuming a core user base of urban middle-class men increases the likelihood of gender bias.
Critics also point out that in-house bias tests at AI companies are not advanced enough to catch subtle gender biases. “I don’t know the exact procedures companies use for bias testing,” Oh said, “but from what is known so far, most involve multiple-choice questions, like a four-option format, to detect bias.” Such methods struggle to detect biases in contextual scenarios, like the storytelling tests used in her research. She added that male decision-makers in their 50s and 60s in AI research often prioritize AI advancement over issues like bias and ethics when working with limited research budgets.
The two-day conference starting August 7 featured multiple presentations on AI and gender bias. Emad Karim, Head of Innovation Strategy at UN Women’s Asia-Pacific Regional Office, said that “among 138 countries analyzed, only 24 mentioned gender in their national AI strategies,” adding that “only 19 percent of biographical entries on Wikipedia, the core training data for AI, are about women.” Lee Hye-sook, Director of Korea Center for Gendered Innovations for Science and Technology Research, said, “Even in medical research using AI, such as for dementia studies, it is rare to see separate models developed for men and women.”
※This article was translated by an AI tool and edited by a professional translator.
Copyright © 경향신문. 무단전재 및 재배포 금지.
- [단독]“민원인 앞에서도 담배”···박강수 마포구청장·백남환 구의장 금연구역 위반 의혹
- 만찬 총격 용의자 “소아성애자, 강간범, 반역자 범죄 허용 못해”···범행 전 성명서 속 ‘정당
- [단독] 하정우 AI수석, 오늘 사표내고 부산 북갑 출마…이르면 29일 민주당 인재영입식
- 행사 30분 만에 ‘탕, 탕’…트럼프, 대피 중 바닥에 주저앉기도
- 이 대통령 지지율 62.2%, 3.3%P↓···민주 51.3%·국힘 30.7%[리얼미터]
- 대통령 ‘X’만 바라보는 국토부···“부동산 ‘대책’만 있지 ‘정책’이 없다”
- “이스라엘 헬기, 레바논 주거지에 중기관총 사격”···휴지조각 된 휴전 협정
- [6·3 지방선거 인터뷰] 박완수 국민의힘 경남지사 후보 “김경수, 도정 실패해놓고 재도전? 도민
- 낚시·조개잡이 관광용 전락한 ‘인천공항 자기부상열차’···타는 승객도 없다
- 김병민 정무부시장 등 오세훈 참모들 서울시 떠난다···“본격적인 선거운동 돌입”