AI Isn't Lying — But It's Not Telling the Truth Either

April 09, 2024

I keep a ChatGPT window open almost all day. It’s become a core part of my workflow, whether I’m testing detection logic, writing policy, iterating on security architecture, or just thinking out loud. In many ways, it’s like having a highly responsive assistant that never gets tired and always offers a starting point. I tend to use it as my rubber duck. But the more I’ve used it, the more I’ve noticed something else: I’ve become judgy about how other people use AI.

There’s a difference between collaborating with AI and just copying from it. Too often, I see people accept responses at face value, pasting results into reports or decisions without context, without verification, and without ownership. That’s not intelligence — artificial or otherwise. That’s outsourcing judgment. And judgment is one thing AI can’t reliably provide.

Yes, I know, this isn’t a new warning. “Trust but verify” has been a guiding principle in security and computing for decades. But here’s the thing: people aren’t just outsourcing fact-checking, they’re outsourcing responsibility. In high-trust environments, AI output can start to feel like consensus. And when multiple people start accepting it without pushback, you get a feedback loop that looks a lot like groupthink… only faster, and harder to unwind.

It’s also easy to forget what ChatGPT really is under the hood. Every output is generated by predicting the next most plausible word, not by reasoning or verifying facts. The result might feel accurate because it’s designed to sound polished and confident. That polish can be persuasive, especially when you’re short on time or under pressure to deliver. But fluency doesn’t equal truth, and the more confident the model sounds, the more carefully it should be checked.
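To make that concrete, here’s a deliberately tiny sketch of what “predicting the next most plausible word” means. This is plain toy Python, not anything resembling how ChatGPT is actually built (the vocabulary, probabilities, and function names are all made up for illustration), but the core loop is the same idea: score candidate next words, pick one, repeat. Nothing in that loop checks whether the output is true.

```python
import random

# Toy "language model": for each context word, a hand-made distribution over
# possible next words. Purely illustrative -- real models condition on long
# contexts and billions of parameters, but the generation loop looks like this.
NEXT_WORD_PROBS = {
    "the":    {"report": 0.5, "attacker": 0.3, "policy": 0.2},
    "report": {"is": 0.7, "shows": 0.3},
    "is":     {"accurate": 0.6, "wrong": 0.4},
}

def generate(start: str, length: int = 3) -> str:
    words = [start]
    for _ in range(length):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        # Sample the next word in proportion to its probability.
        # Note what is missing: no step here verifies facts or reasons about
        # the claim being made -- only statistical plausibility is consulted.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the report is accurate" -- fluent, never verified
```

The point of the sketch isn’t the mechanics; it’s that fluency falls out of the sampling process for free, while truth never enters the picture.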

To be clear, I’m not suggesting AI can’t be trusted at all. For low-risk or well-scoped questions, it’s incredibly efficient. It can recall syntax, clarify concepts, summarize known standards, or even help outline a tricky memo. But when the stakes rise, especially in security, strategy, or research, AI needs to stay in the passenger seat. It’s a powerful assistant, not a decision-maker.

That’s why I don’t treat it as a source of truth. I treat it as a second brain that needs fact-checking. I keep my own notes externally, maintain my own reasoning trail, and make sure I understand what’s being said before I act on it. It’s an invaluable tool, if its limitations are understood and accounted for.

What concerns me more than hallucinated facts or awkward phrasing, and this is usually the real kicker, is the false confidence that builds when trust is given too freely. If you’re not careful, ChatGPT will start telling you what it thinks you want to hear. And if you don’t bring your own critical lens to the conversation, you end up in an echo chamber of your own design.

I’m not anti-AI. I use it constantly. But I don’t let it replace the thinking I’m responsible for. In the end, I own the outcome, not the model.