
Why are so many AI assistants female by default — and should we be worried about that? | TechRadar

How gender became baked into the sound and design of modern AI



Open ChatGPT and you can choose between different voices. Some feel obviously feminine, others masculine, and several are more neutral. They have fairly neutral names too, like Ember, Sol and Juniper.

But it wasn’t always that way. For years, many AI-powered assistants arrived with a default setting: female. Although ChatGPT isn’t exactly the same kind of system as the early voice assistants that first entered our homes, think back to their names: Siri, Alexa, Cortana. Even when they weren’t explicitly gendered, the voices often were.

There might be more flexibility now, but many AI tools, chatbots and assistants still tend to skew female. And as humans, we have a strong tendency to anthropomorphize technology — to project personality, intention and even emotion onto it.


But does it really matter? Absolutely. As AI becomes more embedded in daily life, decisions about how it sounds, what personality it projects and what gender it appears to have all matter.

People don’t just ask these systems for the weather. They confide in them and rely on them for work. Some even abuse them. At the other end of the spectrum, some form deep emotional attachments to them. When conversational AI that can mean so much to us is designed to sound human, and often specifically feminine, that choice can shape expectations about who serves, who assists and who holds authority.

There isn’t one neat answer. Early voice assistants were developed at a time when much of the available speech data, including customer service recordings and telecommunications archives, was dominated by women’s voices. So that influenced early design and training decisions.

But helping roles were feminized long before they were digitized. Think telephone operators, secretaries and receptionists. Positions associated with assistance and emotional labor were historically performed by women, and those associations have proven remarkably durable, both in how tech companies have designed these products and in what we expect from them.

This is partly why companies have often justified defaulting to a female voice by citing research suggesting people find female voices more pleasant, more trustworthy or easier to engage with. What I find fascinating is that yes, there is research that supports aspects of this, alongside the broader cultural context. But the findings are not definitive. Preferences are shaped by social norms, expectations about authority and care, and ideas about which voices “fit” particular roles in particular contexts.

There’s also a widely repeated claim that humans prefer female voices from infancy. Babies hear their mother’s voice in the womb, the argument goes, so we’re wired to respond positively to female voices.

But Kate Devlin, Professor of AI and Society in the Department of Digital Humanities, King's College London, challenges that narrative. In her book Turned On: Science, Sex and Robots, she writes:


“The idea behind this is that babies respond to their mother’s voice in the womb over all other voices. But isn’t that because, well, they’re inside their mother? I asked my friend, baby scientist Caspar Addyman, if this might be the case. ‘Babies do prefer female voices and faces,’ he told me. ‘But only in the first eight months or so. I’m not aware of any evidence for this beyond that period.’”

In other words, even if early preference exists, it may not explain adult behavior or how our preferences evolve over time.

More recent research further complicates the assumption that users strongly favor female assistants because they’re perceived as more trustworthy. A 2021 study found that while stereotyping can occur with gendered voice assistants, there were no significant differences in trust formed towards a gender-ambiguous voice versus a gendered voice. If trust doesn’t reliably hinge on femininity, the rationale for defaulting to it becomes harder to defend.

Media has played a role too. I’ve written before about how sci-fi influences how we treat AI today and many of our favorite sci-fi stories have long imagined AI in feminized forms. Think seductive operating systems, compliant digital companions, subservient robotic helpers. Male robots and AIs exist, of course, but the archetype of the “helpful female machine” persists.

If these defaults are rooted in older labor roles, inherited stereotypes and research that may no longer hold up across the board, why perpetuate them? Tech is rarely shy about reinvention. If we’re building the future, we could choose to build it differently and more equitably.

This might sound trivial to some people. It’s just a voice, and users can change it now anyway, right? But the issue here isn’t just how AI sounds. It’s what it symbolizes, what it reinforces and the feedback loops it creates.

A 2024 study titled The feminization of AI-powered voice assistants explains: “This bias can manifest in several ways and at different levels, such as training data bias, inclusive design challenges, stereotyped responses that reinforce gender prejudice, female voice default, passive or submissive tone, poor handling of harassment, and insufficient range of diverse voice options.”

Research increasingly suggests that gendered technology doesn’t just mirror stereotypes; it can entrench them, shaping expectations about who serves, who assists and who holds authority.

Today, users have more choice. Many assistants and chatbots still default to a female voice, but male and gender-neutral options are increasingly available. However, at the time of writing, there are still no clear regulatory standards addressing gender stereotyping in AI design.

There’s more at stake than voice settings alone. Expanding genuinely neutral options is one step. Increasing gender diversity within AI development teams is another. Design decisions often reflect who is in the room. And despite the fact that more people than ever are using, and being shaped by, AI systems, women remain underrepresented in AI development. Recent estimates suggest they hold roughly 22–26% of AI-related roles worldwide, and under 15% of senior AI leadership roles.

So maybe more than anything, this is a reminder that technology shapes culture, and culture shapes technology in return. If we want more equitable systems, in AI and beyond, that loop is worth interrupting.


Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality.

