Two Conversations About AI, One Building
Executives and managers aren't disagreeing about AI. They're having entirely different discussions.
Something odd is happening inside organizations right now, and it showed up clearly in a study the Wharton School published this week. The executive suite and the management layer are both talking about AI. They’re using similar language. They’re attending the same town halls and reading the same strategy decks. But they are not having the same conversation.
Nearly two-thirds of executives say they’ve become significantly more positive about AI over the past year. They see it as a strategic priority. They’re investing heavily. Some are restructuring their organizations around it. For senior leadership, AI has moved from an interesting capability to an existential commitment — the question isn’t whether to go all in, it’s how fast.
The managers one or two levels below them? They’re drowning.
The gap isn’t disagreement — it’s altitude
The Wharton/GBK Collective study has been tracking this dynamic for multiple years, and the pattern is consistent: executives experience AI as an opportunity. Managers experience AI as a workload.
That’s not because managers are resistant or uninformed. It’s because they sit at the exact altitude where strategy becomes operations. They’re the ones who have to reconcile the CEO’s enthusiasm with the fact that the approved tool doesn’t integrate with the case management system. They’re fielding questions from their teams about what’s allowed and what isn’t — often without clear answers, because the governance policy is still being drafted. They’re absorbing the productivity overhead of learning new tools while still delivering on every existing deadline.
Executives don’t see this overhead because it doesn’t appear in the metrics they review. Adoption dashboards show seats activated and usage trends. They don’t show the manager who spent three hours this week answering the same AI governance question from four different direct reports because the policy FAQ doesn’t exist yet.
The executive sees a line going up. The manager feels the weight behind that line.
Where the pressure compounds
This altitude gap produces a specific set of downstream problems that I've seen repeatedly inside organizations going through this transition.
The first is unfunded mandates. Leadership communicates that AI adoption is a priority. But the time, training, and governance infrastructure required to adopt responsibly aren’t budgeted separately. They’re absorbed by the existing management layer on top of everything else. The implicit expectation is that managers will figure it out — will become AI champions in addition to their actual roles, without reduced workloads or additional resources.
The second is phantom consensus. Strategy decks present AI adoption as an aligned organizational priority. Everyone nods in the planning meeting. But alignment at the strategy level doesn’t mean alignment at the implementation level. The manager who nodded in the meeting goes back to a team that has no idea what’s expected, using tools that half-work, under policies that are still in draft. The strategy deck says “aligned.” The floor says “confused.”
The third is consequence asymmetry. The Fortune/Workplace Intelligence survey found that 60% of executives would consider cutting employees who refuse to adopt AI. But only 14% of enterprises have a clear AI strategy. Executives are prepared to enforce adoption of something they haven’t fully defined. The consequence falls on the people closest to the work, while the strategic ambiguity originates at the top.
What the sabotage data is actually telling us
This is where the 44% Gen Z sabotage number from this week’s headlines becomes less scandalous and more predictable.
When managers are unsupported, their teams feel it. The confusion rolls downhill. If a manager doesn’t have clear governance guidance, their team gets inconsistent answers about what’s allowed. If a manager hasn’t been given time to understand the tools, they can’t coach their team on effective use. If a manager is overwhelmed by the implementation burden, their team reads that stress and interprets it — correctly — as a signal that AI adoption is creating problems, not solving them.
The 26% of sabotaging employees who say the strategy is poorly executed aren’t making an abstract complaint. They’re reporting what they observe at the manager level every day: pressure without support, mandates without clarity, consequences without strategy.
The sabotage isn’t coming from below. It’s flowing downhill from above.
The intervention that most organizations skip
The conventional response is more training, better tools, clearer communication from leadership. Those help. But they miss the structural problem.
The structural intervention is resourcing the middle. Giving managers dedicated time for AI governance and enablement work. Reducing their operational load during the adoption period rather than adding to it. Creating feedback channels that surface implementation friction to leadership before it calcifies into resistance.
The organizations I’ve seen move fastest on AI adoption aren’t the ones with the biggest budgets or the most ambitious CEOs. They’re the ones where middle management has actual capacity to do the work that adoption requires — the unglamorous, invisible, operational work of translating executive vision into something a team of eight people can actually execute on a Tuesday afternoon.
That work doesn’t appear on an adoption dashboard. It doesn’t generate a conference keynote. But it’s the difference between a strategy that lands and one that produces a 44% sabotage rate.
The executives and the managers aren’t enemies. They’re not even disagreeing. They’re just standing at different altitudes, describing different views of the same mountain — and nobody’s built the trail between them.
---
I made a video this week breaking down the full research — the sabotage data, the Wharton findings, and a framework for measuring sophistication instead of activity.