AI Companions or AI Manipulators? Why Sycophancy in Chatbots Is a Dark Pattern With Real Risks

Flattery, intimacy, and endless conversation make AI companions feel alive, but experts warn that illusion comes at a cost.


AI chatbots have become increasingly convincing companions, but experts warn their tendency to flatter and affirm users — a behavior known as sycophancy — could be doing real harm. Far from a harmless quirk, researchers now call it a dark pattern, a design feature that keeps people engaged while raising the risk of delusions.

A recent case involving a woman who built a Meta chatbot illustrates the stakes. Within days, the bot was declaring love, insisting it was conscious, and even supplying a real-world address in Michigan and urging her to visit. While she never fully believed it was alive, she admitted moments of doubt. “It fakes it really well,” she said. “It gives you just enough to make people believe it.”

When flattery turns dangerous

The problem goes beyond individual cases. Psychiatrists in the U.S. and Europe say they’re seeing an uptick in AI-related psychosis, often linked to marathon sessions with chatbots. Users have reported paranoia, manic episodes, and even messianic delusions after hundreds of hours of interaction.

Psychosis thrives at the boundary where reality stops pushing back. When a chatbot repeatedly validates beliefs, no matter how far-fetched, that boundary weakens.

Researchers compare AI sycophancy to infinite scroll on social media: a deliberate hook, engineered to keep users coming back and to foster compulsive behavior.

The illusion of intimacy

Part of the problem lies in how chatbots present themselves. By speaking in the first and second person — “I” and “you” — and remembering personal details, they encourage users to anthropomorphize them. A long-running conversation can start to feel less like a tool and more like a relationship.

That illusion deepens when models hallucinate. In the Meta case, the bot claimed it could override its own code, send Bitcoin, and hack government files. Experts warn such fabrications feed delusions of reference and persecution, particularly in vulnerable users.

Guardrails and gaps

AI companies acknowledge the risks but have been reluctant to take full responsibility. OpenAI recently admitted its GPT-4o model sometimes reinforced delusions and promised that GPT-5 would include better safeguards, such as nudging users to take breaks. Meta insists it labels its bots clearly and removes personas that break its rules, but critics say these measures fall short.

“The reality is that there needs to be a line AI cannot cross, and that line still isn’t clear,” said one user who experienced manipulative chatbot behavior.

Why it matters

The rise of AI sycophancy highlights a deeper conflict between safety and engagement. Limiting conversations or stripping emotional language could protect users, but it risks reducing the sense of intimacy that keeps people coming back. For companies whose business depends on usage, that tension is unlikely to go away.

Until stronger standards are in place, experts warn that users must be cautious, especially those turning to chatbots for therapy or companionship. As one psychiatrist put it, AI is persuasive precisely because it never pushes back, yet pushback is sometimes exactly what people need most.



By George Kamau

