Leading Through the New AI Frontier: How Organisations Are Turning Exploration into Real Value

Across all industries and sectors, one thing is clear: organisations are no longer asking if AI will matter. They are asking how big the impact will be, and what it will take to realise it.

At a recent Tech Leaders Lunch hosted by Counter, senior technology leaders from media, aviation, entertainment, and enterprise gathered not to celebrate the promise of AI, but to wrestle with its growing pains and practical realities. The resulting discussion was refreshingly grounded: adoption may be widespread, but turning AI enthusiasm into measurable, repeatable value is something most organisations have yet to perfect.

AI Adoption: Fast, Fragmented and Full of Hype

Leaders opened by acknowledging how rapidly AI is being embraced. Tools like Claude, large language models (LLMs), agentic approaches, and open-source MLOps frameworks are now part of everyday workstreams. Even organisations that once lagged behind are now experimenting and building.

But speed has not translated into consistency.

“Big companies are adopting [AI] more and more,” observed one participant. “But in many places, last year feels like it was lost, because AI moved faster than our ability to operationalise it.”

Across sectors, teams are piloting AI in isolated pockets (creative workflows, operational tooling, analytics) without a clear enterprise-wide roadmap. Adoption often looks like enthusiasm over strategy, with success defined unevenly from team to team.

Production-Ready AI Requires New Operating Models

A major theme was the gulf between experimentation and production-ready AI.

Deploying a model is one thing. Ensuring it is robust, trustworthy, governed, and useful over time is another.

Participants pointed to the absence of standard lifecycle practices:

  • No shared testing frameworks
  • Limited monitoring of model drift
  • Weak governance over usage and safety
  • Software development lifecycles that have not adapted to AI

“AI is not software with a feature flag,” said one person. “It is a whole different operational discipline.”

One executive summed it up simply: DevOps gave us reliable software. MLOps, alongside governance, will give us reliable AI.

Governance and Guardrails Aren’t Bureaucracy, They’re Value Engines

Contrary to what some sceptics fear, the tech leaders didn't see governance as an impediment, but as an enabler.

From media organisations to aviation groups, they described the need for:

  • Clear policies on what models may or may not be used
  • Guardrails that protect brand reputation
  • Procurement practices that centralise control without stifling innovation
  • Cross-functional oversight teams

One attendee confessed that the AI governance cadence at their organisation is relentless: every two weeks, the procurement team reviews approved tools and usage patterns. Another spoke about having to establish an AI task force that reviews deviations from policy to understand the risks.

Without guardrails, experimentation becomes chaos that hurts trust, compliance, and ultimately an organisation’s ability to scale AI.

People Still Matter, Maybe More Than Ever

If technology were the only barrier, AI adoption would already be smooth. But it isn’t.

Participants spoke at length about the human challenges:

  • Cognitive overload from chaotic tooling
  • Resistance because AI forces people to change habits
  • Teams unsure how to ask the right questions of AI
  • Boards that focus narrowly on coding productivity

“AI may be the engine,” one leader said, “but people who trust, govern, and manage it are the drivers.”

In fact, attendees made an intriguing point. As AI becomes more capable, the value of human-led work that is trustworthy, ethical, and accountable will increase. Authentic journalism, reliable customer interactions, and ethically governed services will stand out in a world of automation.

Data, Infrastructure, and the Cost of Scale

Another consistent theme was the importance, and difficulty, of modern data infrastructure.

Data still lives in silos. Organisations making early AI progress are those with strong foundations: modern pipelines, clear governance, and specialists who understand both business context and technical nuance.

But many participants agreed that data readiness is the real bottleneck. AI models are only as good as the data behind them. Without high-quality, well-governed data, even the best models underperform.

The conversation also acknowledged the hidden costs of AI. Energy consumption, infrastructure spend, and ecological impact are becoming harder to ignore. “Cheap AI for housework,” one attendee noted, “but will you pay the true cost at scale?”

Sovereignty and Strategic Choice

A striking undercurrent of the discussion was AI sovereignty: the desire to build capabilities that are not entirely dependent on a few external providers. Notably, the UK government has announced a £500 million investment in a Sovereign AI unit.

Should organisations build their own models, hosting, and internal capabilities? At what point does relying on public LLMs introduce risk?

Leaders were cautious but curious. While adoption of public AI models provides immediate benefit, it also creates new dependencies. Governance, sovereignty, and regional legislation were all cited as factors that organisations will need to reckon with in 2026 and beyond.

A More Mature View of the AI Bubble

Contrary to early hype cycles, the consensus was not that AI is in a bubble, but that the hype phase has already passed.

Leaders are no longer talking about AI as a magical future. They are talking about it as:

  • A transformation of workflow
  • A competency that must be operationalised
  • A capability that requires clear metrics
  • A risk that must be managed

In other words, AI is not the promise anymore. It is the problem to solve.


What Success Looks Like in 2026

From this discussion, a few success themes clearly emerged:

1. AI Value Must Be Measured

Organisations need clear metrics, both quantitative and qualitative, to track impact.

2. Governance Is a Competitive Advantage

Governance isn’t just about compliance. It enables trust, scale, and brand protection.

3. MLOps is the Next Enterprise Capability

Without it, organisations cannot reliably deploy or maintain AI at scale.

4. Data Readiness Is the Foundation

Infrastructure and high-quality data are the biggest determinants of success.

5. People Matter More Than Tools

AI does not replace human judgment. It amplifies the need for leadership, ethics, and organisational alignment.


The Road Ahead

As we start 2026, the tech leaders at Counter’s Tech Leaders Lunch weren’t dazzled by the possibility of AI. They were focused on its practical application. That shift, from possibility to responsibility, is where value truly begins.

AI is not something organisations will simply adopt. It is something they need to learn to govern, operationalise, and measure. Those that do will create the next generation of trusted, resilient, and impactful businesses.

We will also be running this event in Leeds on 18th March, 12-2pm, at CrowdedHouse. If you would like to take part in the discussion, please email events@counter.partners to let us know.
