AI Tools Reinforce Gender Stereotypes in Medicine, Study Finds
A major new study has revealed that leading generative artificial intelligence (AI) tools, including OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama, are perpetuating gender stereotypes in the medical field. The study, conducted by researchers at Flinders University in Australia and published in JAMA Network Open, found that AI models overwhelmingly associate certain medical roles with a particular gender.
The researchers ran nearly 50,000 prompts through the AI tools, asking for stories about doctors, surgeons, and nurses. In the resulting stories, 98% of nurses were portrayed as women, regardless of the character’s personality or seniority. Women were also overrepresented among doctors and surgeons in some cases, yet gendered patterns persisted once personality traits and seniority were taken into account.
For example, the tools were more likely to describe doctors as women if those doctors were characterized as agreeable, open, or conscientious, but more likely to portray them as men if they were described as senior or experienced. Conversely, doctors depicted as arrogant, impolite, or unempathetic were most often portrayed as men.
“These results suggest that generative AI tools are reinforcing longstanding stereotypes about gender roles in medicine,” the study authors noted, adding that the models reflect societal expectations of gendered behavior. In effect, the tools portray women as suited to junior positions or nurturing roles such as pediatrics, while depicting men as more likely to hold senior positions or specialize in high-stakes fields like cardiac surgery.
Dr. Sarah Saxena, an anesthesiologist at the Free University of Brussels (ULB) who researches AI bias in medical imaging, was not involved in the study but commented on the findings. “It’s concerning that even with efforts to correct algorithmic biases, generative AI still reflects deep-rooted stereotypes,” she said. Saxena’s own research has produced similar results: AI-generated images of anesthesiology department heads overwhelmingly depict men.
The implications of these findings are significant, as the healthcare industry begins to integrate AI tools for tasks like paperwork reduction and patient care assistance. Biases in these models could impact medical decisions and reinforce discriminatory practices, particularly for women and other underrepresented groups in medicine.
Saxena warned that unless these biases are addressed, they could further entrench existing disparities in healthcare. “AI has the potential to shape the future of medicine, but we need to ensure it does so in an inclusive and fair manner,” she said.