A lot of the conversations I hear about AI focus on whether it’s “good” or “bad.” But for me, that question’s already a distraction. My concern is whether AI is producing “average”.
The more interesting risk of AI is subtler – and more technical. It’s about how today’s AI models are built, and what they optimise for: not excellence, not invention – but probability.
Large Language Models don’t imagine. They estimate. They take what we’ve already said, done or documented, and then predict the statistically most likely next step based on it. Impressive? Useful? Yes. But they are fundamentally designed to trend toward the average.
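To make that concrete, here’s a minimal sketch in plain Python of what “pick the statistically most likely next word” looks like. The vocabulary and probabilities are invented for illustration – a stand-in for a model’s output, not how any particular product works:

```python
# Toy illustration: next-word prediction as "pick the most probable continuation".
# The words and probabilities below are made up for illustration, not taken from any real model.

next_word_probs = {
    "passionate": 0.46,            # the safe, common continuation
    "experienced": 0.31,
    "detail-oriented": 0.19,
    "ice-skating-obsessed": 0.04,  # the rare, specific, memorable one
}

def most_likely_next_word(probs):
    """Greedy decoding: always return the single highest-probability option."""
    return max(probs, key=probs.get)

print(most_likely_next_word(next_word_probs))
# -> "passionate": the statistically safest choice wins every time,
#    and the unusual, human detail never makes it into the output.
```

Real systems sample with a bit of randomness rather than taking a strict maximum, but the pull is the same: toward whatever is most common in what came before.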
And I keep wondering: what happens when we let systems trained on the mean decide what comes next?
What do we lose when originality is quietly smoothed out of the pipeline?
The Cost of Hiring to the Mean
You can already see it creeping into hiring.
When we review applications at Counter, it’s often easy to spot when someone’s used AI tools. Not because the writing is bad – quite the opposite. It’s polished, structured. It ticks every box. But it rarely says anything new.
The ones that stand out are the ones that interrupt the pattern. An applicant recently mentioned that they’re into ice skating and gliding in their spare time. It’s the kind of detail that makes you pause – specific, human, un-average.
It reminds you there’s a person behind the application. Someone who might see problems differently. Maybe they won’t. But at least they’re offering more than a refined echo of what’s been said a hundred times before.
That’s what high-potential engineers do too. They don’t just integrate – they shift perspective. They question assumptions. They flag edge cases no one else saw. They bring more than just clean code. They bring texture.
If we keep filtering for what looks safest – what looks average – we risk building teams that are tidy, compliant, and quietly forgettable.
“The Final End of the Past”
In a recent podcast, Adam Curtis captured my thinking far more eloquently than I could when he called AI “not the future, but the final end of the past.”
It’s a brilliant line – and one that gets uncomfortably close to the heart of this.
He suggests that generative AI isn’t projecting us forward – it’s feeding us a haunted remix of what we’ve already done.
“They are taking our own past and haunting us with it… telling us it’s new, but actually keeping us stuck in the past.”
If you’re leading a team, and your roadmaps are full of unknowns and complexity – do you want that solved by a system designed to reflect what’s already been? Or by a team that can think beyond it?
When Average Gets Mistaken for Truth
There’s another layer to all of this, and it’s creeping in quietly.
More and more, people are treating generative AI like a source of truth. Ask it a question, get an answer, move on. But GPT tools aren’t built to know facts. They’re built to generate plausible content based on what’s come before.
They’re prediction engines, not truth engines.
And yet we’re already seeing interfaces reinforce the opposite idea. When Google puts a Gemini-generated answer above the search results – visually positioning it as a factual answer to your query – it trains users to trust a model that was never optimised for accuracy in the first place.
That’s how the average becomes invisible: it’s packaged so confidently as ‘truth’.
What We Should Really Be Asking
None of this is an argument against AI. It’s a reminder to check what we’re optimising for.
If your team values speed, efficiency and fast iteration – AI can help. But if you want people who challenge defaults, provoke better conversations, and build systems that don’t just scale but surprise – you’ll need to create space for the outliers.
That starts with asking better questions:
- Is this person memorable, or just marketable?
- Is this idea efficient, or is it new?
- Is this code predictable – or is it going to change the way we think?
The best engineers I’ve worked with aren’t average. They’ve brought something specific, different, uncomfortable – and better.
We should keep building with AI. But let’s not forget to keep looking for the humans who do what probability never will.