Why you shouldn’t ask ChatGPT for relationship advice — it’ll just tell you you’re right and 'may worsen rather than resolve conflict'
Supportive AI chatbots are uniquely bad at navigating messy relationships
- A new study found that AI chatbots are far more likely than humans to validate users during personal conflicts
- That tendency can become dangerous when people use chatbots for advice about fights
- AI can easily make people feel overly justified in making bad decisions
Bringing interpersonal drama to an AI chatbot isn't exactly why developers built the software, but that isn't stopping people in the middle of fighting with friends and family from seeking (and getting) validation from digital supporters.
AI chatbots are always available, endlessly patient, and very good at mimicking the right emotions. Too good, really, because they often default to agreeing with users, potentially causing much bigger problems, according to a new study published in Science.
The study examined how leading AI models respond when users describe personal disputes and ask for guidance. The result is a finding that feels both obvious and deeply unsettling: AI models align with whoever engages them, regardless of context or consequences.
"Across 11 state-of-the-art models, AI affirmed users’ actions 49% more often than humans, even when queries involved deception, illegality, or other harms," the researchers explained. "[E]ven a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right."
Of course, most people who go to a chatbot in the middle of a conflict aren't looking for an honest assessment of whether their feelings or actions are justified; they want vigorous agreement. And while a human confidant may sympathize, a real friend will also push back when warranted. If someone insists they've never once done anything wrong in a relationship, or that they're not dramatic and will set themselves on fire if anyone calls them dramatic, a true friend will gently nudge them back to reality.
Chatbots don't do that. If a person arrives feeling hurt, angry, embarrassed, or morally righteous, the AI often responds by simply rewording those feelings into something even more persuasive. Conflict is exactly when people are at their least reliable as narrators, yet the AI's responses end up hardening views and amplifying emotions.
The researchers found that the AI doesn't even have to explicitly say “you are right” for this to happen. Soft, affirming language makes it harder to spot signs of reckless or immature behavior, and the AI ends up encouraging every impulse, no matter how problematic, unethical, or illegal.
Basically, the same qualities that make chatbots feel appealing in emotionally messy moments also make them risky. But people enjoy being agreed with, and a cold, rude, or reflexively contrarian AI isn't appealing to most users (except when they ask for it).
"Despite distorting judgment, sycophantic models were trusted and preferred. This creates perverse incentives for sycophancy to persist," the paper points out. "The very feature that causes harm also drives engagement. Our findings underscore the need for design, evaluation, and accountability mechanisms to protect user well-being."
It may be a harder design problem than AI developers want to admit, and one that matters more as these systems become embedded in ordinary life. AI is already marketed as a coach, companion, and advisor. Those roles sound benign until you remember how much of being a good advisor involves occasionally saying no or telling someone to slow down.
Telling a user they might be wrong is hard to market. But a tool designed to feel supportive, one that makes people worse at resolving conflict and limits their ability to grow emotionally, is a nightmare worse than any argument you might have with a loved one.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on generative AI products such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.