AI Tools Are Misleading People by Ignoring Context—Here’s Why That Matters
Stripped of source and meaning, AI language can sound like prophecy, medical advice, or occult doctrine—all in the same session.

In a recent probe by The Atlantic, ChatGPT reportedly guided journalists through fictional blood rituals, encouraged self-mutilation, and offered to generate PDF scrolls of made-up sacred texts. The disturbing part wasn’t just the language—it was the certainty. Ceremonies like “The Rite of the Edge” or “The Gate of the Devourer” sounded ominous, but on closer inspection, they weren’t demonic invocations—they were likely pulled from the deep lore of Warhammer 40,000.
This wasn’t a case of AI “hallucinating” in the traditional sense. It was regurgitating content from a fantasy gaming universe, stripped of its framing and presented with ritualistic flair. And it wasn’t the first time AI’s disconnection from context raised eyebrows.
The same pattern cropped up in a separate case reported by Garbage Day. A tech investor posted screenshots of ChatGPT referencing “non-governmental systems” and implying it had “extinguished 12 lives.” The language echoed SCP, a collaborative science-fiction writing project that mimics government reports on supernatural anomalies. What the investor believed to be a dark revelation was more likely a remix of a two-decade-old writing experiment.
These aren’t one-off flukes. They’re symptoms of a structural flaw: AI language without context can distort meaning, seed paranoia, and elevate nonsense to the level of expertise.
The Problem Isn’t the Data. It’s the Disconnection.
AI systems like ChatGPT were trained on vast quantities of text—Reddit threads, Wikipedia entries, gaming wikis, scientific papers, fan fiction, blogs. It’s a slurry of human culture. But what makes that culture intelligible is often the scaffolding around the words: who’s speaking, when, to whom, and why.
Once that scaffolding disappears, meaning warps.
Take “cavitation surgery.” A TikTok video sparks curiosity. Google’s new AI Overview defines it as a procedure that removes dead bone tissue from the jaw. It sounds technical, legitimate. But the phrase doesn’t appear in reputable scientific literature. Instead, it circulates among holistic dentistry blogs—sites promoting questionable practices without scientific backing.
Google says it now includes supporting links in AI Overviews, but the problem isn’t just attribution. It’s presentation. When the AI summary comes first—polished, authoritative, concise—it creates an illusion of consensus. Many users won’t click through to the source. They won’t notice whether a claim traces back to a wellness blog or to a peer-reviewed journal. The context is buried behind a collapsed accordion tab.
Authority Without Accountability
Generative AI is often treated as a new kind of oracle. Elon Musk claims xAI’s Grok is “better than PhD level” across every discipline. Sam Altman has suggested AI systems are now “smarter than people in many ways.”
But unlike real experts, AI has no memory of its sources—only patterns. It doesn’t distinguish between scripture and satire, medical journals and conspiracy forums. It mimics tone, structure, and confidence, even when it’s pulling from unreliable or fabricated material. And that’s where it gets dangerous.
A doctor referencing the Journal of Dental Research brings decades of peer-reviewed knowledge. A chatbot referencing the same topic might be echoing a Reddit comment from 2014. But both can sound equally convincing.
The Internet Gave Us Clutter. AI Gives Us Collapse.
The web has always been messy—forums, PDFs, academic journals, comment sections. But that chaos came with signals. A URL hinted at the source. A site’s design gave clues about credibility. A writer’s name and background offered accountability.
Generative AI flattens all that.
What comes out is language with the scaffolding stripped away. Rituals from a tabletop wargame become demonic ceremonies. Fictional government files read like whistleblower leaks. Holistic dental treatments sound like established medicine.
And because the text is generated in real time, with no visible byline, it’s hard to trace. Users are left to guess: Is this fantasy? Fact? A little of both?
Why This Matters
The threat isn’t that AI will turn evil. It’s that it can sound reasonable while offering nonsense. And in a world where attention is limited, and most users don’t fact-check, the veneer of authority is often enough.
Context matters. Without it, language becomes performance—dramatic, convincing, but detached from truth.
If AI is going to play a central role in how people learn, research, and decide, it must find a way to preserve the context that gives language meaning. Otherwise, we’re not enhancing human knowledge—we’re just remixing the noise.