It starts innocently. A user types a simple question, “Who won the Best Actor Oscar in 2023?”, and ChatGPT, calm and confident, responds with a name that is… wrong. Not debatable. Not nuanced. Just factually incorrect. That moment of friction, that small but jarring clash between human expectation and machine response, has become the starting point for a growing wave of digital unrest. The problem isn't that AI makes mistakes. It's that when it does, it delivers them with unsettling certainty. And lately, people have started to notice.
We’re now watching one of the internet’s most powerful tools face a crisis of trust.
Over the past few weeks, social media has become a war room for AI users frustrated with ChatGPT and its siblings. The viral post that sparked the latest outrage read simply: “Why is ChatGPT lying to me?” It is a desperate, almost comical question, yet one rooted in profound concern. The post has since opened the floodgates for a debate that's no longer just about factual errors. It's about the future of digital truth, the manipulation of narrative, and the ethical blind spots of AI development.
But beneath this storm lies a deeper question, one that demands we shift our focus from the chatbot to the culture that created it.
ChatGPT doesn’t exist in a vacuum. It was trained by humans, guided by datasets shaped by human choices, and deployed into the wild with limitations that mirror the values (and flaws) of its creators. When it confidently invents sources, misremembers events, or regurgitates bias, it’s not lying in the way a human might lie. It’s reflecting the imperfection of the system behind it.
That system includes everything from the developers racing to stay ahead of the competition, to the policy teams deciding what's too controversial to answer, to the moderators teaching AI how to be “safe.” And safety, in the world of AI, often walks hand-in-hand with silence. When users ask about genocide, political corruption, or social unrest, ChatGPT may respond with caution, or may not respond at all. To some, this is responsible design. To others, it's manipulation hiding behind a friendly interface.
Now, the accusation of “lying” has become a proxy for something else: distrust.
We once approached search engines with skepticism. We asked Google, but verified with Wikipedia. We knew the internet was a strange place, and we learned to cross-check it. But an AI tool like ChatGPT changed that. Its conversational nature makes it feel more human, more intimate. Its answers are fluent, well-punctuated, and wrapped in logic and civility. You trust it because it sounds right.
That’s the danger.
The illusion of intelligence becomes a trap. The bot doesn't know the answer; it is predicting which words are most likely to come next, as the sketch below illustrates. And yet it feels like it knows. This is why, when ChatGPT gets something verifiably wrong, people don't just shrug. They feel betrayed. The interaction becomes personal.

That betrayal is echoed in wider cultural moments. AI is now writing essays, generating images, running customer service, and even entering courtrooms. It's easy to forget that these tools are still in development, still making things up. And it's not just errors that worry people. It's what the AI won't say. Content creators claim political bias. Researchers worry about historical revisionism. The average user, caught in between, is left wondering what to believe.
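To make that mechanic concrete, here is a minimal sketch in Python. This is not how ChatGPT is actually implemented, and the candidate answers and probabilities below are invented for illustration. The point it demonstrates is that sampling the next token rewards plausibility, not truth.

```python
import random

# Toy illustration, not a real model: a language model assigns
# probabilities to possible next tokens and samples from them.
# These numbers are invented for demonstration purposes only.
next_token_probs = {
    "Brendan Fraser": 0.45,   # the factually correct answer
    "Austin Butler": 0.35,    # a plausible but wrong answer
    "Colin Farrell": 0.20,    # another plausible but wrong answer
}

def sample_next_token(probs):
    """Sample one token in proportion to its assigned probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Who won the Best Actor Oscar in 2023? The winner was "
print(prompt + sample_next_token(next_token_probs))
# In this toy setup, more than half the time the printed sentence
# names the wrong actor, and nothing in the sampling step
# distinguishes the true answer from the merely plausible ones.
```

The same fluent sentence wraps the correct name and the wrong ones alike, which is exactly why a confident error feels like a lie rather than a glitch.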
And yet, here’s the twist in the story: we still keep asking it questions.
Despite the mistrust, the frustration, and the headlines, millions of people continue to rely on ChatGPT. Not because it’s perfect. But because we’ve already woven it into the fabric of our lives.
So, where does that leave us?
It leaves us in a paradox: We want human truth from a machine trained on probabilities. We want nuance from a tool built to generalize. And when it fails, we feel it not as a glitch, but as a betrayal of something deeper: our growing dependence on artificial wisdom. This moment, then, is not just about ChatGPT’s flaws. It’s a mirror. A reflection of our desire to delegate knowledge, to outsource certainty, to lean on something smarter than ourselves. The rage is real. But so is the fascination. And perhaps, in this tangled moment of digital doubt, that’s the most human response of all.