Johan Roos
May 7, 2026
The leadership platform is impressive. Personalized pathways, instant feedback, a two-week program delivered in three days. Participants rate the experience highly. The L&D team celebrates the efficiency. No one asks: did anyone learn anything?
This scene is playing out across organizations. Early findings from the Hult Ashridge Leadership Learning Index (HALLI) help explain why.
HALLI participants are enthusiastic about learning. Ninety-five percent report high motivation, and 90% say last year's learning was a worthwhile investment. Yet the two methodologies rated most effective, experiential learning and learning communities, are offered by only 17% and 12% of organizations.
These methodologies remain highly valued, but they are not always the most requested. When asked how their organization should better support learning, AI integration was mentioned more often than experiential and peer-based approaches.
While demand for AI in learning is high, is it actually an effective learning tool? In practice, research suggests that the effortlessness of learning with AI may undermine durable learning.
Melumad and Yun at the Wharton School found that people who learned through AI-generated summaries developed shallower knowledge, exerted less cognitive effort, and produced less original thinking than those who searched for and integrated information themselves. The factual content was identical. The difference was effort. When AI assembles knowledge for you, the questioning, comparing, and synthesizing that make learning durable simply disappear.
This matters beyond the classroom. New research from Project Iceberg at MIT finds that roughly 11.7% of the US wage base, about $1.2 trillion, sits in cognitive tasks that current AI systems can already perform. The exposure is five times larger than the visible tech-sector story and concentrated in administrative, financial, and professional service roles. These are the apprenticeship grounds where future leaders learn to think. Payroll data already shows a 13% relative decline in employment for workers aged 22 to 25 in AI-exposed occupations. This is how we hollow out the apprenticeship of struggle that builds professional authority.
I call this effortless learning. It feels productive, it looks efficient, and it quietly erodes what I call human magic, the capabilities leadership depends on.
In my new book Human Magic (Routledge, 2026), I explore how curiosity, the foundational capability for all leadership learning, is tested by AI through three specific mechanisms.
First, shallow questioning loops. AI produces plausible answers so fast that the state of not-knowing, which Socrates recognized as the precondition for genuine inquiry, collapses before it can deepen. Perplexity is the engine of learning. AI turns it off.
Second, curiosity offloading. Professionals delegate the act of framing questions to the algorithm. In a study of 1,923 high-use GenAI users, Baldeo found that AI reliance correlated strongly with prompt dependence and negatively with confidence in one's own reasoning. Fifty-eight percent of participants agreed that AI did most of the thinking during complex tasks. The less people overrode the algorithm's suggestions, the less confident they became in their own reasoning. Entry-level workers showed the lowest override rates and the lowest confidence of any group. Professionals working alongside AI are quietly outsourcing the act of questioning itself.
Third, algorithmic narrowing. Large language models optimize for consensus and probability, filtering out odd signals and new perspectives. The algorithm delivers what is most likely based on what is already known. Leadership judgment depends on detecting what is most important. These are rarely the same thing.
HALLI reveals a further pattern: the more senior the respondent, the more optimistic they are about organizational learning. This is consistent with broader evidence. Ranganathan and Ye found that 62% of entry-level workers reported burnout from AI-driven work intensification, compared to 38% of C-suite leaders. The TalentLMS 2026 L&D Report documents a 20-percentage-point perception gap between HR managers and employees on whether AI training is effective.
Senior leaders are more optimistic about learning and about the next generation of leaders, and they are less exposed to the roles AI is most rapidly absorbing. That gap is cause for leadership concern.
Design determines whether AI strengthens or starves learning. Dell'Acqua and colleagues show that AI amplifies performance where deep expertise already exists, expertise built through years of effortful practice. Without that foundation, AI delivers fluency without depth.
Most learning tools default to answer-giving. Few scaffold the effortful practice that creates expertise in the first place.
The argument is about design, not technology. The same tools that erode curiosity can, when configured with intention, preserve it. The choice will be made either way, by deliberate design or by default.
Curiosity requires holding a question open long enough to feel its weight. Empathy requires something harder: holding another person's experience alongside your own, without reducing it to a summary. If effortless learning erodes the first capacity, what does it do to the second? That is the question for the next article in this series.
Three practices can restore effortful learning by design.

1. Mandate the override
Teach learners to document where they disagreed with or reshaped AI output. Authorship grows from that friction.
2. Rebuild the apprenticeship
Redesign entry-level work so junior professionals practice the judgment AI cannot replicate. The Iceberg data suggests the apprenticeship ground is shrinking. Design it back in, or lose it by default.
3. Audit for agency
Track override rates, deliberation time before AI adoption, and the share of analyses where the human reasoning is visible. If your metrics reward ease, your system will produce it.
Professor of Strategy at Hult International Business School
Johan Roos is a Professor of Strategy and former Chief Academic Officer (2016-2024) at Hult International Business School, and Senior Advisor at Frucke Forum. This article draws on insights from his book Human Magic: Leading with Wisdom in an Era of Algorithms (Routledge, 2026).
This is the second article in a four-part series on AI and leadership. The first article, "When AI writes your strategy, what's left for you? Everything that matters," explored AI's impact on strategic thinking. The next article will examine AI and empathetic leadership.