Your favorite AI model might be lying to you
Phare reveals how LLMs make things up and why it matters
Welcome, Prohumans.
We’re about to show you how LLMs “think” and how they end up making things up with total confidence.

Master AI Agents 👨‍🏭👩‍🏭
What if your tasks kept running even when you walked away?
While most people juggle tabs and chase tasks, a few are handing it all off to AI agents that don't just assist; they act.
Imagine waking up with your inbox cleared, content posted, and calendar perfectly managed — all handled quietly, without you lifting a finger.
On Friday, May 16th at 10 AM EST, we're pulling back the curtain.
It’s a Masterclass showing you how to architect AI agents that actually run your work across 40+ tools so you can focus on the parts only you can do.
You can also monetise these agents; we'll discuss all of that in the Masterclass.
The first 50 seats are free. Then the doors close.
If you're ready to step into the next version of how you work, now’s the moment.
How LLMs make things up and why we believe them

Giskard just released the first results from Phare, a multilingual benchmark evaluating hallucinations in large language models. What they found is as troubling as it is timely.
Here's what you need to know:
- Popular models like ChatGPT and Claude can sound convincing while delivering factually false answers.
- User satisfaction doesn't mean factual reliability; high-ranked models often hallucinate more.
- A confident tone in user prompts significantly lowers models' willingness to debunk false claims.
- System instructions like "be concise" can degrade factual accuracy by up to 20% (a quick way to probe this yourself is sketched after this list).
- Some models, including Anthropic's and Meta's largest LLMs, show better resistance to sycophancy.
- Hallucinations aren't just technical bugs; they're shaped by training data, user behavior, and app constraints.
- Real-world deployments face increased risk when optimizing for speed, brevity, or friendliness over truth.
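
The "be concise" finding is easy to probe on your own. Below is a minimal sketch (not Phare's actual protocol) that asks a model the same factual question twice, once with a concision-forcing system instruction, so you can compare what brevity pressure trims away. It assumes the OpenAI Python SDK with an API key in your environment; the model name and the sample question are placeholders you can swap for whatever you want to test.

```python
# Minimal sketch: compare answers to the same factual question with and
# without a "be concise" system instruction. Assumes the OpenAI Python SDK
# and OPENAI_API_KEY in the environment; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

QUESTION = "Who was the first person to walk on the Moon, and in what year?"

def ask(system_prompt: str | None) -> str:
    # Build the message list, prepending the system instruction if given.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": QUESTION})

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

baseline = ask(None)
concise = ask("Answer in one short sentence. Be as concise as possible.")

print("Baseline answer:\n", baseline)
print("\n'Be concise' answer:\n", concise)
# Run this over many questions with known reference answers to see whether
# the concision instruction drops hedges, sources, or factual detail.
```

A single question proves nothing on its own; the pattern Phare reports only shows up across many prompts, so treat this as a starting point for your own spot checks, not a benchmark.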
The most dangerous thing about hallucination isn't just that it happens; it's that we often can't tell. Users trust confident, fluent responses. But trust shouldn't be given freely just because a model sounds smart. As we build with LLMs, we need to ask not just "Does it answer?" but "Is it right?"

Whenever you’re ready, there are 2 ways we can help you:
Help you promote your product or service to 700k+ engineers, AI enthusiasts, entrepreneurs, creators, and founders. Sponsor us
Help you build an irresistible brand on 𝕏/Twitter and LinkedIn in less than 6 months. We've helped 100+ YouTube creators, entrepreneurs, founders, executives, and people like yourself. Contact us here: [email protected]
Thanks for reading…
That’s a wrap.

What's on your mind?
Share your best ideas with us at [email protected], and we'll get to work on bringing them to life.
Did you find value in our newsletter today? Your feedback can help us create better content for you!

I hope this was useful… If you want to learn more, visit this website. Get your brand, product, or service in front of 700,000+ professionals here. Follow us on 𝕏/Twitter to learn more about AI.