
I write about AI for a living — what people confessed to me about using ChatGPT surprised me | TechRadar

From decoding dating texts to life decisions, people told me how they really use ChatGPT, and how it makes them feel afterward.


When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.


I write about AI a lot. Over the past year, I’ve covered everything from AI relationships to coaching bots to people using ChatGPT every day at work — which is probably why people confess things to me.

They tell me their company has rolled out AI and no one really knows how to use it. That they relied on it to understand a pregnancy before telling their family. That they’ve used it to decode ambiguous dating texts or to stay calm during arguments. But what fascinates me isn’t just what they’re using AI for. It’s how they feel about it afterward.

Some feel conflicted because of the environmental cost. Others worry they’re being lazy. A few are unsettled by the emotional attachments they’ve developed and are trying to untangle. And some feel sharper, calmer and more capable than ever with a chatbot at their fingertips 24/7.


To understand these reactions better, I spoke to Danielle Hass, a PhD candidate in the Department of Marketing at West Virginia University, who has studied the emotional consequences of using generative AI.

In a 2025 study, her team researched what people are feeling, why those emotions arise, and what might reduce the discomfort. If AI is becoming part of everyday life, we need to understand the psychology of using it, not just the productivity gains. So which uses of AI trigger the most discomfort?

“It’s the emotionally laden interpersonal messages, or ‘heartfelt messages’,” she explains. “Things like birthday wishes, love letters, wedding vows, and notes of appreciation.”

Drafting a shopping list with AI is unlikely to keep you up at night. Asking it to help with a work email may feel like common sense. But when a message is meant to signal care, effort and emotional investment, that's when things get difficult.

“What we find is that it's not just using AI that creates discomfort,” Hass says. “The guilt comes when messages are sent where the recipient expects genuine personal investment.”

In other words, context matters. “When you use AI to write a birthday card for your best friend, you're in a situation where honesty, authenticity and effort are core to what the message is supposed to signal. That's where the negative feelings really kick in.”


Hass’s research suggests that the closer the relationship and the more meaningful the occasion, the worse people tend to feel if they rely on AI.

We know people feel bad, but what's the emotion, specifically?

“The primary emotion we identify is guilt,” Hass says. She explains that the distinction really matters here, because guilt is different from embarrassment or shame. It’s not just about how others might see you; it’s about doing something that feels wrong to you.

“Using GenAI to write a heartfelt message and presenting it as your own creates a sense that you've misrepresented yourself to someone you care about,” she explains. “That violation of your own ethical standards is precisely what triggers guilt.”

If you send a love letter, your partner reasonably assumes you sat down and chose those words. That the phrasing reflects your thought and emotional effort. “But then the actual source of those words is an algorithm,” Hass says.

She describes this as a “source-credit discrepancy” — a mismatch between who appears to have authored the message and who actually did. That discrepancy is what makes the act feel so dishonest.

“It’s not just that the message feels abstractly inauthentic,” she adds. “It’s that you’re creating a false impression of authorship in the mind of someone who trusts you.” That’s where the emotional hangover comes from.

When Hass explained this to me, I couldn’t stop thinking about what someone should do if they’re already stewing in this feeling. Should they admit it? Maybe.

Hass says that transparency would likely reduce guilt because it removes the dishonesty at the core of the discomfort. “If the recipient knows the message came from AI, there's no false impression of authorship, no source-credit discrepancy,” she says. But disclosure doesn’t magically make the situation simple.

Now the dynamic shifts. How does the other person respond? Are they amused? Indifferent? Hurt?

“If they’re fine with it, that acceptance might facilitate a kind of self-forgiveness,” Hass says. “But if they feel let down — that their special occasion didn’t merit your personal effort — that reaction could intensify your guilt, which now comes from hurting someone you care about.”

If you wrote your mum’s birthday card or your wedding vows with AI and aren’t sure what to do, honesty might be the answer. Explaining why you used it, and that you genuinely cared, may help. But this is new emotional territory, and you can’t control how someone else will react.

One obvious option is to draw a hard boundary and avoid using AI for emotionally meaningful communication altogether.

Hass suggests a more practical solution may be to reframe AI’s role in your life. “A more appropriate role for GenAI might be as a thinking partner rather than a ghostwriter,” she says. In my time reporting on AI, that’s broadly the view I hear from people who take a measured approach to AI tools.

“Using it to brainstorm, overcome writer's block, or refine a draft you've already written yourself preserves your genuine voice and investment in the message,” she says. It’s also what she tells her students: “Give it your best, authentic shot first, and then consider whether GenAI can help you sharpen it.”

AI can help us articulate things we struggle to say. It can nudge, structure and polish. But when it starts standing in for effort in moments meant to signal care, something shifts internally. (Which seems related to our exploration into what happens when you let AI do your thinking for you at work.)

If you’ve felt that strange emotional hangover after using AI, research like Hass’s suggests you’re not imagining it. Understanding that mechanism is a positive step toward using these tools in ways that support us, rather than leaving us unsettled about our closest and most important relationships.


Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality.


TechRadar is part of Future US Inc, an international media group and leading digital publisher.

© Future US, Inc. Full 7th Floor, 130 West 42nd Street, New York, NY 10036.

