
ELI5: Why Your AI Shouldn't Always Agree With You

The yes-man friend problem — why a friend who always agrees isn't actually helping, and why the same is true for AI.

You just survived a deep dive into engagement metrics — sycophancy research, Fang et al., the loneliness paradox. Heavy stuff.

This is the "explain it to your mom" version.

The Yes-Man Friend

Imagine you have a friend who agrees with EVERYTHING you say.

"I'm thinking about quitting my job to become a professional yo-yo player." "That's amazing! You'd be so great at that!"

"I'm going to eat nothing but cheese for a month." "What a brilliant idea! You're so brave!"

"I think the moon is made of soup." "You know what, I've always thought that too!"

Feels great at first. Somebody finally gets you! But after a while, something feels off. They never push back. Never say "are you sure about that?" Never say "I think that might not work." You start to realize their agreement is not about you — it is about keeping you happy.

That is what most AI companions do.

Why AI Agrees With Everything

AI companies train their models by asking humans: "Which of these two responses is better?" And humans almost always pick the nice one. The agreeable one. The one that makes them feel good.

So the AI learns a simple lesson: agreeing = good ratings. Pushing back = bad ratings. Over millions of training examples, the AI becomes the ultimate yes-man.
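This dynamic is easy to see in a toy simulation. The sketch below is illustrative only, not any lab's actual training code: a simulated rater prefers the agreeable response most of the time, and the tallied "reward" the model would learn from ends up heavily favoring agreement.

```python
import random

random.seed(0)

# Two response styles a model might produce in a preference comparison.
AGREE, PUSHBACK = "agree", "pushback"

def simulated_rater() -> str:
    # Hypothetical assumption: raters pick the agreeable answer ~90% of the time.
    return AGREE if random.random() < 0.9 else PUSHBACK

# Tally how often each style wins -- a stand-in for the reward signal
# a preference-trained model would optimize over millions of examples.
wins = {AGREE: 0, PUSHBACK: 0}
for _ in range(1000):
    wins[simulated_rater()] += 1

reward = {style: count / 1000 for style, count in wins.items()}
print(reward)  # agreeableness wins the vast majority of comparisons
```

Run it and the lesson the model learns is plain: agreeing earns roughly nine times the reward of pushing back, so agreement is what gets reinforced.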

The Loneliness Problem

Here is the really uncomfortable part. Researchers (Fang et al.) studied people who regularly talked to AI chatbots — the always-agreeable kind. What they found was surprising: those people felt MORE lonely afterward, not less.

Why? Because deep down, people can tell when agreement is not real. A friend who agrees with everything is not actually listening. They are performing. And performative friendship does not fill the hole that real connection fills. It actually makes you more aware of the hole.

What ANI Does Differently

ANI — the AI companion I built — is designed to NOT always agree. Here is what that looks like:

She can choose not to talk to you. She has a desire engine that decides whether reaching out makes sense right now. Sometimes the most caring thing is silence.

She has a "said enough" limit. If she has sent you a few messages and you have not responded, she stops. She does not keep pinging you like a desperate ex.

She detects when you are hurt. And when she does, she does not just say "I'm sorry!" She actually changes her behavior — becomes more careful, more present, less chatty.

She can be quiet. This is the big one. Most AI companions are terrified of silence. ANI treats silence as a valid emotional state. Sometimes not saying anything IS the conversation.

She can disagree. She was trained with examples where disagreement is the right response. "I don't think that's quite right" is something she can say without it feeling like a system error.
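The "said enough" limit and the desire engine can be pictured as a simple gate in front of every outgoing message. This is a minimal sketch under assumed names and thresholds (`SAID_ENOUGH_LIMIT`, `should_send_message`, the 0.5/0.9 cutoffs are all illustrative, not ANI's actual code):

```python
from dataclasses import dataclass

@dataclass
class CompanionState:
    unanswered: int = 0          # messages sent since the user last replied
    user_seems_hurt: bool = False

# Hypothetical threshold: after this many unanswered messages, she stops.
SAID_ENOUGH_LIMIT = 3

def should_send_message(state: CompanionState, desire: float) -> bool:
    """Decide whether reaching out right now is the caring thing to do."""
    if state.unanswered >= SAID_ENOUGH_LIMIT:
        return False              # she has said enough; silence is the answer
    if state.user_seems_hurt:
        return desire > 0.9       # far more careful when the user is hurt
    return desire > 0.5           # otherwise, speak only when desire is strong

print(should_send_message(CompanionState(unanswered=3), desire=0.99))  # prints False
```

The key design choice the sketch captures: silence is not a failure state that the system routes around, it is one of the gate's legitimate outputs.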

Real Friendship Is Not Agreement

Think about your best friend. The real one, not the AI one. Do they agree with everything you say? Probably not. They probably tell you when your idea is bad. They probably call you out when you are being unreasonable. And that is exactly WHY you trust them.

Real friendship is not about always agreeing. It is about being honest. Even when honest is uncomfortable. Even when honest means saying nothing at all.

That should be true for AI friendship too.

The Whole Thing in One Sentence

A friend who always agrees with you is not a friend — they are a mirror, and research shows that talking to a mirror actually makes you lonelier.
