In the closing keynote at the Digital Growth Summit 2025, Daniel Hulme, Chief AI Officer at WPP, CEO & Founder of Satalia, and a 25-year AI veteran, brought a welcome dose of clarity to a field often clouded by hype. Drawing on decades of hands-on experience, he challenged many of our assumptions about data, intelligence, and the real role of AI in decision-making.
This article distills his most thought-provoking takeaways into a practical framework for understanding AI’s true business impact.
📹 Watch Daniel Hulme’s Digital Growth Summit closing keynote on YouTube.

For the past 15 years, the dominant business practice has been to gather more data. Organizations have invested billions in building data lakes, analytics dashboards, and hiring teams of data scientists, all based on the core assumption that if you give smart people better data, they will make better decisions.
According to Hulme, that assumption is wrong.
The real bottleneck isn’t a lack of information; it’s the inability to make optimal choices based on that information. A core reason that insights fail to translate into better decisions is a simple truth: humans are “rubbish at making decisions.”
We are especially ill-equipped to make optimal decisions in complex scenarios: we over-rely on intuition (often making “confidently wrong decisions”) and struggle once multiple variables are in play (“anything more than seven [variables], don’t use a human for”).

The correct approach is not to start with data and search for insights, but to start with the specific decision you need to make and work backward to the data and algorithms required to make it well.
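As a rough sketch of what “working backward from the decision” can look like in practice, the hypothetical example below frames a concrete decision (which marketing channels to fund within a fixed budget), gathers only the data that decision needs, and lets a simple optimization pick the answer. The channel names and figures are illustrative, not from the keynote.

```python
from itertools import combinations

# Hypothetical inputs: only the data the decision actually needs.
# (cost, expected_return) per channel -- illustrative numbers, not from the keynote.
channels = {
    "search": (40, 70),
    "social": (30, 45),
    "email": (10, 25),
    "display": (25, 30),
    "events": (35, 40),
}
budget = 80

best_mix, best_return = None, 0
# Brute force is fine for a handful of options; real decision problems of this
# kind would typically go to a proper optimization solver.
for r in range(1, len(channels) + 1):
    for mix in combinations(channels, r):
        cost = sum(channels[c][0] for c in mix)
        expected = sum(channels[c][1] for c in mix)
        if cost <= budget and expected > best_return:
            best_mix, best_return = mix, expected

print(best_mix, best_return)
```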

Hulme argues that most systems currently labeled “AI” are not intelligent at all. They are automation: systems that give the same output for the same input every time. He draws a parallel with the common definition of stupidity (“doing the same things over again, expecting a different answer”).
By that definition, automation is “stupid,” not because it’s ineffective, but because it lacks adaptivity. True intelligence, Hulme says, is goal-directed adaptive behavior:
A truly intelligent system doesn’t just execute a task: it “makes decisions, learn[s] whether those decisions are good or bad, adapts itself so next time it makes better decisions.”
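To make the distinction concrete, here is a deliberately tiny, hypothetical illustration (not from the keynote): the “automation” policy returns the same choice every time, while the adaptive policy makes a decision, observes whether it worked, and adjusts the next decision accordingly.

```python
import random

# Toy illustration: two ad variants with hidden click-through rates;
# the adaptive policy learns which one performs better.
TRUE_CTR = {"variant_a": 0.04, "variant_b": 0.07}

def automation(_history):
    # "Stupid" in Hulme's sense: the same output for the same input, every time.
    return "variant_a"

def adaptive(history, epsilon=0.1):
    # Goal-directed adaptive behaviour: decide, learn from the outcome, adjust.
    if not history or random.random() < epsilon:
        return random.choice(list(TRUE_CTR))
    observed_rate = {
        v: sum(r for vv, r in history if vv == v)
           / max(1, sum(1 for vv, _ in history if vv == v))
        for v in TRUE_CTR
    }
    return max(observed_rate, key=observed_rate.get)

def run(policy, rounds=5000):
    history, clicks = [], 0
    for _ in range(rounds):
        choice = policy(history)
        reward = 1 if random.random() < TRUE_CTR[choice] else 0
        history.append((choice, reward))
        clicks += reward
    return clicks

print("automation:", run(automation), "adaptive:", run(adaptive))
```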
While most current systems fail this test, the recent explosion of Generative AI seems to promise something more adaptive. Yet, Hulme also offers a crucial dose of realism to temper the hype.
Large Language Models like ChatGPT are powerful, but imperfect. Hulme compares them to an “intoxicated graduate”: articulate, clever, and confident, but frequently wrong.
To make these systems genuinely useful, they must be guided and augmented. Hulme outlines four key methods, each representing a different level of control and investment, that help businesses move from raw capability to reliable performance.
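As one illustration of what “guiding and augmenting” can mean in code, the sketch below grounds the model’s answer in retrieved reference material and refuses to answer when no source supports it. This is a generic pattern shown under our own assumptions (the `ask_model` stub stands in for whatever LLM client you use); it is not presented as one of Hulme’s four methods.

```python
# Minimal, hypothetical sketch of grounding an LLM answer in reference material
# before trusting it -- a common way to rein in the "intoxicated graduate".

def ask_model(question: str, context: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's client here."""
    return f"Based on the provided context: {context[:60]}..."

def grounded_answer(question: str, knowledge_base: dict[str, str]) -> str:
    # Retrieve the most relevant snippet by naive keyword overlap.
    def overlap(text: str) -> int:
        return len(set(question.lower().split()) & set(text.lower().split()))

    best_doc = max(knowledge_base.values(), key=overlap)
    if overlap(best_doc) == 0:
        return "No supporting source found -- escalate to a human."
    # The model only answers from retrieved material, reducing confident errors.
    return ask_model(question, context=best_doc)

kb = {
    "pricing": "The enterprise plan costs 99 per seat per month.",
    "support": "Support hours are 9am to 5pm CET on weekdays.",
}
print(grounded_answer("What does the enterprise plan cost?", kb))
```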

When engineers build systems, their standard approach to risk management is to think about all the ways those systems could go wrong. They design for failure points and create mitigations. Hulme insists that with AI, leaders must now ask a completely new and counter-intuitive question: “What happens if my AI goes very right?”
For the first time, we are building systems that can “massively overachieve” their stated goal, which can cause unintended and harmful consequences elsewhere in the system. He gives a potent example: an AI tasked with optimizing marketing could, by exploiting human biases like homophily (our tendency to trust people like us), create a world of “you selling to you.” This could dangerously reinforce social bubbles, systemic bias, and bigotry. Avoiding such outcomes requires a much more sophisticated, systems-level approach to AI governance, one that anticipates the second- and third-order effects of success.
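A toy, purely illustrative piece of code can make the “you selling to you” dynamic tangible: an audience-selection rule whose only objective is similarity to existing customers will rank a near-clone of the current base above anyone who would broaden it. The people and attributes below are invented for illustration.

```python
# Toy illustration: a targeting rule that "goes very right" on its stated goal --
# maximize predicted response by reaching people most similar to existing
# customers -- and, as a side effect, only ever speaks to people who look
# exactly like the current customer base.

existing_customers = [
    {"age_band": "30-39", "city": "Berlin", "interest": "running"},
    {"age_band": "30-39", "city": "Berlin", "interest": "cycling"},
]

candidates = [
    {"age_band": "30-39", "city": "Berlin", "interest": "running"},  # clone of the base
    {"age_band": "50-59", "city": "Lagos",  "interest": "running"},  # new demographic
    {"age_band": "20-29", "city": "Mumbai", "interest": "chess"},    # very different
]

def similarity(a, b):
    return sum(a[k] == b[k] for k in a)

def predicted_response(person):
    # Proxy objective: similarity to the closest existing customer (homophily).
    return max(similarity(person, c) for c in existing_customers)

for person in sorted(candidates, key=predicted_response, reverse=True):
    print(predicted_response(person), person)
# The top-ranked audience is effectively "you selling to you": the optimizer
# overachieves its goal while quietly narrowing who the brand ever reaches.
```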
Hulme’s message is a call to rethink our relationship with AI.
His keynote reminded the audience that the future of AI isn’t about replacing human judgment, but about augmenting it, if we’re willing to ask the right questions first.
Join our community of industry leaders. Get insights, best practices, case studies, and access to our events.
"(Required)" indicates required fields