
What Will They Do All Day?

Two Wall Street Journal stories got me thinking. When you read them together, they explain why so many companies feel stuck with AI, or feel further along than they really are.
One story quotes tech leaders arguing that what a CEO does might be “one of the easier things” for AI to do. Sundar Pichai said it. Sam Altman doubled down and talked about an AI running divisions, even entire companies, with decision-making getting “pretty good, pretty soon.”
The story is designed to provoke. The argument is that if leadership is about updates, approvals, reviews, escalations, and market-facing narratives, it starts to look like a workflow. Workflows get engineered. Workflows get optimized. And workflows get automated.
The other story shows something even more consequential happening right now inside real organizations: executives are increasingly confident about AI’s impact, while employees describe a very different day-to-day. This is a notable, but all-too-common, AI leadership gap.
Two Realities Inside the Same Company
In the WSJ reporting, a Section survey of 5,000 white-collar employees found that two-thirds of nonmanagement workers said AI saves them less than two hours a week, with 40% saying it saves them no time at all. On the other hand, 20% of employees say that AI saves them two to four hours per week. But does it, really?
This is just a personal observation. AI promised to free up time for creative thinking and higher-purpose work. Yet, a lot of high performers I talk to and work with find that AI makes them even busier!
In the same reporting, 33% of C-suite executives said AI saves them four to eight hours per week, another 24% said eight to twelve hours, and 19% said more than twelve hours.
At the same time, the Section AI Proficiency Report shows how different the emotional experience is depending on where you sit in the org chart. Individual contributors report being anxious or overwhelmed at far higher rates than the C-suite. In Section’s data, individual contributors show 68% anxious and 32% excited, while the C-suite shows 26% anxious and 74% excited.
So when leaders ask, “Why is adoption slower than expected?” or “Why are teams not moving faster?”, the answer is that people do not scale what they do not trust. They do not lean into what they fear. They do not volunteer for change when the consequences or upsides are unclear.
The AI Tax Is the Quiet Killer of Momentum and Excitement
There is another force at work, and it explains why “time saved” can feel true in one meeting and false in the next.
Workday calls it an AI tax. These are the hidden, cumulative costs and inefficiencies that organizations, and people, incur (and feel) when productivity is measured only by output, not by quality or performance. The AI tax is levied when people have to spend unplanned time editing, correcting, verifying, and reworking AI-generated content.
For example, if you use AI to produce content and you don’t do the work to vet it before sharing, you are imposing an AI tax on your colleagues. And over time, that tax erodes trust and confidence in you. Who can afford that?
Their research calls out that for every 10 hours of productivity gained, about four hours are paid back in rework: correcting, clarifying, and refining AI output. The result is a net loss of speed and performance even as teams appear to move faster, an unplanned verification layer, and a trust gap between people.
This is what many executives never see because it does not show up cleanly in a KPI. Drafting gets faster. Reviewing gets heavier. Output increases. Accountability becomes more fragile. Teams move quicker, and then spend the reclaimed time auditing, fixing, and defending the work.
Work does not disappear. It shifts. And it often shifts onto the people with the least margin, the least time, and the least psychological safety to take risks.
Section’s findings reinforce this new reality in a different way.
Most workers are still using AI for very basic tasks, and the time savings reflect that. Their report shows a large share of the workforce saving little or no time, and it also captures a blunt sentiment: 40% said they would be fine never using AI again.
Translation: many employees are resisting because the experience is not yet designed to earn trust and reduce friction.
Leaders are enthusiastic (even if they’re under pressure to accelerate adoption). Yet, in reality, employees are overwhelmed, overloaded, and unclear on expectations.
Fear Is Part of the Adoption Curve
The WSJ also cites a WSJ-NORC poll in which six in 10 respondents characterized AI and other new technologies as mostly a threat to the U.S. economy because of their potential to replace well-paid workers.
So yes, adoption slows. It’s not because people can’t or don’t want to learn; it’s because people are doing the math in their heads. It’s also what they feel. They are trying to figure out whether AI is meant to help them, measure them, or replace them.
And if leaders are serious about adoption, they cannot outsource this to comms, training videos, or mandates. The organization needs a shared language for what AI is for, where it fits, what “good” looks like, and what happens when the system is wrong.
If AI Can “Do the CEO Job,” What Should the CEO Become?
Let’s revisit that first WSJ story. Leaders can’t just focus on adoption and acceleration. They need to look in the mirror to understand how AI is reshaping decision work.
In an AI era, the CEO becomes less of a lead decision-maker and more of a system architect.
The CEO’s advantage will come from designing how decisions happen, not merely being present when decisions are announced. It comes from building the conditions for trust, not just demanding speed. It comes from creating learning goals and loops, not just reviewing quarterly outputs.
That is what AI fluency means at the top of the house. It’s not just about adoption. It’s about elevation and defined standards.
Section’s report captures part of the problem: many executives believe deployments are succeeding even while the rest of the organization disagrees.
What CEOs, and Their Advisors, Should Do This Quarter and the Next
If you are a CEO, board member, CIO, CHRO, COO, CAIO, or a transformation leader, treat this as an operating model redesign, not an AI strategy.
Start with reinvestment. If AI is giving leaders back hours every week, those hours are strategic capacity. Put them into redesigning the work itself: a defined vision and strategy, clearer standards, and better training that maps to roles and goals.
Upgrade your metrics. Hours saved is a vanity metric if the AI tax is quietly reclaiming the gains through rework. Measure the net value, including the time lost to correction, verification, and refinement. Be honest about it.
Close the fluency gap with clarity. The workforce needs to know where AI is expected to assist, where human judgment is required, how outputs are evaluated, and how accountability works when AI is wrong.
That is how trust is built. Trust is what scales adoption.
AI is changing work. At the same time, it is revealing leadership, or the absence of it.
It reveals the vision leaders have for where AI can take the company. It shows whether a company understands its workflows well enough to redesign them. It surfaces whether leaders are measuring the right things. It also reveals whether the organization has the courage to talk honestly about fear, uncertainty, skills, and the future of roles and the division of tasks between people and AI agents.
If AI ever “takes” a CEO job, it will be because leadership stayed static while everything else evolved.
Read Mindshift | Subscribe to Brian’s Newsletter | Consider Brian as Your Next Speaker