The Dangers of Treating ChatGPT Like a Companion


Key Takeaways

  • Thanks to advancements like ChatGPT, AI chatbots have become incredibly lifelike.
  • AI companionship has its benefits, like aiding grief management or alleviating loneliness.
  • Treating AI like a true friend can lead to emotional reliance, bad habits, and privacy concerns.



AI chatbots like ChatGPT are really easy to talk to. Send them a message at any time of day, and they’ll reply in seconds. Or, speak to them through voice chat, and they’ll respond and emote just like a regular human friend would. However, AI chatbots are not humans or your friends, and it can be dangerous to forget that!


Chatbots Have Become Incredibly Lifelike

If you’ve played around with OpenAI’s GPT-4o or watched the demo videos, you’ll know that ChatGPT has come a long way in passing as a human, especially when it comes to speech. Earlier versions of ChatGPT could talk, but their delivery was always a bit too perfect, robotic, and emotionless. You’d know you were talking to an AI.

This new model, however, could fool even the harshest critic. It giggles when you tell a joke, says “um” and “uh,” shifts its tone, hesitates before speaking, and does just about everything else a real human being would do in conversation.



In fact, ChatGPT is now so good that I’m tempted to treat it like a real person, even though I know in my head that it’s not. This is what we call anthropomorphism: the tendency to assign human traits to non-human objects.

What’s funny is that ChatGPT isn’t even pretending to be human, and I already have to remind myself not to anthropomorphize it. That battle gets far harder when you enter the murky world of AI friends.

One prominent example of these AI friends is Replika. Replika’s pitch is that it lets you create avatars that can act as anything from a friend to a therapist to a romantic partner. You can then exchange messages with these avatars, talk to them on a video call, or even interact with them through AR and VR.


There’s also “Friend”, an AI wearable set to launch in 2025 that’s supposed to offer its user constant companionship, emotional support, and assistance.

AI Companionship Isn’t Inherently Bad, But…

AI companionship isn’t an inherently bad idea, and off the top of my head, I can think of a few instances where it would actually be beneficial.


One is grief management. AI chatbots could help you process the loss of a loved one and provide emotional support through your grieving period. In fact, the CEO of Replika, Eugenia Kuyda, revealed that the idea for Replika first came to her in response to the loss of a close friend.


AI companions could also be a boon for people who struggle with loneliness. Picture an elderly person in a nursing home. An AI companion could help stave off feelings of loneliness in between family visits.

AI companions could also be helpful to people with social anxiety, who can use them to practice conversations without worrying about being judged or snickered at behind their backs, as humans tend to do.

But while AI companions could have genuine usefulness, there are still risks to forming relationships with them.

You Could Easily Become Dependent On Your AI Companion

In its safety report on GPT-4o, OpenAI said that human-like socialization with AI could reduce the need for human interaction and possibly affect healthy relationships.

That’s putting it mildly. To state it plainly, treating AI like a companion could cause you to develop an emotional reliance or, worse, something like an addiction.



The simple truth is that, short of physical assistance, AI companions can be better friends to you than any human possibly could. They are always willing to chat, regardless of how late the hour is, and they will never get tired, bored, or distracted. You are always the priority in the conversation, and so long as you’re willing to keep going, the AI companion is there to listen and respond. On the other hand, human friends are limited by the need to sleep and work and can’t always be there when you need them.

Talking to an AI companion feels good every single time, and that creates a positive reinforcement loop. Your brain learns that every chat with the AI will make you feel good, so it craves it more and more, and before you know it, you’re addicted.


This isn’t just conjecture; it’s already happening. In 2023, Replika was forced to reinstate erotic role-playing features in its app after its user base mutinied when they were removed. Some users even claimed to have suffered mental health crises because of the change.

Now, I’ll admit that I got pretty upset when Google killed my favorite Android feature without warning, but not so upset that it triggered a mental health crisis. And if you’re so attached to an AI that losing it could do that, maybe it’s time to reevaluate your relationship with that AI.

Your AI Companion Might Teach You Bad Habits

Anthropomorphizing AI can blur the lines between humans and AI, and this might cause you to start treating people in real life the way you treat AI.

For example, when you’re talking with ChatGPT, it always defers to you. Even if it’s in the middle of explaining something, you can cut in, and the AI will let you have the floor with no hard feelings. You could accidentally carry that behavior over into real life, except that your human friends won’t be as forgiving as an AI.



The bad habits you could learn from AI companions might not all be as benign as interrupting a conversation. For example, you might get used to being the center of attention in your conversations with AI, and this could weaken your ability to maintain healthy relationships with real people.

I also worry that, because these AIs are so agreeable, they could condition you to expect constant approval from others, leaving you struggling to accept rejection or disagreement when it inevitably comes.

Your AI Companion Could Reveal Your Secrets

Privacy is another big reason to be wary of AI companions. As much as it might feel like you’re chatting with a friend, AI companions are machines, and they gather information about you to improve their performance. That’s why there are some things you should never use ChatGPT for.



Sure, these AI companies promise that your conversations are secure, but they can’t always guarantee that. What if they get hacked and your conversations are exposed? Or what if the FBI demands to see your conversation logs? At least with a human friend, it’s your word against theirs. With AI companions, the truth is in plain text for all to see.

Ultimately, AI Chatbots Are Products

Whether AI is marketed as a friend, girlfriend, therapist, or companion, it’s still a product, owned by a company whose goal is to make money.



Just like Microsoft sells Windows, companies like OpenAI are selling their chatbots. But what makes them different from other products is the emotional bond you might form with them, and that’s where things can get risky.

Humans don’t always make the best decisions when emotions are involved, and you’d be surprised at the lengths you might go to in order to maintain an emotional connection, even with an AI. I don’t think a private company, whose main purpose (despite all the nice marketing) is profit, should have that degree of control over you.


Strange as it may sound, AI companions are not a bad idea, and they certainly have their benefits in the right contexts. But the field is still in its infancy, and we’re making up the rules as we go. That’s why it’s important to be aware of the risks that come with treating AI like a friend.


