
Fear is rising. Trust is falling. And the AI industry has no one to blame but itself.
For the last two years, the loudest voices in AI have sold the future through a mix of inevitability, disruption, and dread: AGI is near, jobs are going away, the economy may collapse, and somehow this is all supposed to inspire confidence. It doesn’t. It creates narrative debt. And now that debt is coming due, especially among the very people who are supposed to inherit this future.
In headlines, feeds, and expert commentary, we're being sold a false choice with AI: worship it or fear it.
I’m an optimist. And I believe that AI can help us unlock entirely new possibilities that make humans matter more in driving growth, better outcomes, and a more productive future. That’s why I’m not buying the sensationalism, nor should you.
What I believe is this: the AI we see in headlines and in our feeds has a message problem.
The Narrative Is Backfiring
For the last two years, the public has been bombarded with a relentless narrative: AGI is near. Superintelligence is around the corner. White-collar jobs are in jeopardy. Students are graduating into a future that may not have a place for them. Then, somehow, everyone acts surprised when people feel anxious, skeptical, even angry about AI. The New Yorker recently captured the contradiction well: if you keep telling people your AI will upend their lives, take their jobs, and maybe threaten humanity, they’re going to believe you.
Sam Altman recently made that contradiction impossible to miss. In one post, he wrote, “post-AGI, no one is going to work and the economy is going to collapse.” He also said that he was switching to polyphasic sleep because GPT-5.5 in Codex was so good he didn’t want to miss working with it. Put those two ideas together and you don’t get a coherent roadmap for society. You get cognitive and emotional whiplash: apocalypse on one hand, accelerationist adrenaline on the other.
And this is exactly where the narrative breaks. People are reacting to the story being told about AI by some of the people building it. And a growing number of people have had enough.
Gallup’s latest Gen Z research found that while 51% of Gen Z uses generative AI at least weekly, sentiment has moved sharply in the wrong direction. Excitement dropped 14 points to 22%. Hopefulness fell nine points to 18%. Anger rose to 31%. Anxiety held at 42%. Even more telling, 8 in 10 Gen Z respondents said using AI tools is likely to make it harder for them to learn in the future. Among employed Gen Z workers, 48% said the risks of AI in the workforce outweigh the benefits, and 69% said they trust work done without AI more than AI-assisted work. And, 44% of Gen Z workers admit to sabotaging their employers’ AI deployments as a form of rebellion.
The Federal Reserve Bank of New York says recent college graduates entered the end of 2025 with 5.7% unemployment and 42.5% underemployment, the highest underemployment level since 2020. So when young people ask, Where do I fit in this future? that is not fear talking. Unfortunately, it’s reality.
Predictions Are Not Destiny
This is why I think the current AI debate needs to change for the better. We keep arguing capability as if capability alone determines destiny. It doesn’t.
Even inside the AI world, there is no consensus. Anthropic CEO Dario Amodei has argued that AI could disrupt 50% of entry-level white-collar jobs over the next one to five years. Yann LeCun pushed back, saying people should stop listening to AI builders on labor-market effects and start listening to economists instead. And Nobel-winning economist Daron Acemoglu has since argued that these sweeping predictions reflect “motivated reasoning” as much as insight, because labor outcomes depend not just on model capability, but on wages, organization design, job redesign, and whether new forms of work are created at all.
If the people creating the tools cannot agree on what happens next, then maybe we should stop treating every dramatic forecast as inevitable truth and start treating it as what it is: narrative. And right now, that narrative choice is eroding trust.
One reason is that people can see the contradiction in real time. On one side, the public is told not to panic. On the other, executives are openly exploring how AI might absorb more and more managerial and knowledge work.
One report, citing a Wall Street Journal scoop, says Mark Zuckerberg is building a CEO AI agent to help him retrieve information and do parts of his job faster as Meta flattens teams and pushes AI deeper into the organization. The same report says employee reviews are now partly tied to AI usage and describes a rogue internal AI incident that exposed sensitive data for nearly two hours. Whether you view that as innovation or inevitability, it sends a clear message: automation is no longer being aimed only at repetitive tasks. It is climbing the org chart.
So yes, AI has a trust problem. But trust is the downstream effect. The upstream cause feels like cognitive dissonance.
You cannot tell people AI will make them more productive while also telling them it may wipe out the bottom rung of the ladder. You cannot ask students to embrace AI as essential for their future while they watch employers flatten entry-level roles, automate knowledge work, and celebrate being able to do more with fewer people. You cannot effortlessly oscillate between “don’t worry” and “the economy might collapse” and expect the public to interpret that as visionary leadership.
The Real Opportunity Here Is Leadership
At this point, I can’t believe this is strategy. In fact, I would say it is all starting to accumulate as narrative debt.
And beneath all the noise is a much more important story that leaders are missing.
OpenAI says the typical power user taps about 7x more “thinking power” than the typical user. They call this the “capability overhang”: the widening gap between what AI can do and what most people, companies, and countries are actually doing with it. To me, this is the real divide that matters. The future won’t split neatly between people who have AI and people who don’t. It will split between people who let AI do their thinking and people who use AI to elevate how they think, create, decide, and build.
Here is the opening for leadership to step in.
The real job of leaders right now is to lower the temperature and raise the standard. It is to replace vague hype with a believable path. It is to invest in skills, redesign work, create new on-ramps for early-career talent, and show people how AI can make them more capable, more creative, and more valuable, not more disposable. Automation alone is a cost-center story, the AI status quo. Augmentation plus automation unlocks new value: a story of cost takeout and human-plus-AI value creation.
People don’t just want to better understand the future, they want to understand their part in it.
If leaders cannot answer that with clarity, conviction, and humanity, the backlash we’re starting to see now will look mild in hindsight.
The winners in this era will be the ones who make the future feel believable and aspirational. They will build systems where AI expands human potential instead of shrinking human possibility. They will replace fear with fluency, hype with judgment, and inevitability with intention.
If you are a CEO, your job is to explain what AI means for your people: how work will change, how learning will evolve, what new roles will emerge, and why human judgment, creativity, and imagination will matter even more. Show them the roadmap. Show them the reskilling path. Show them how AI can make them more capable, more creative, and more valuable, not more disposable.
That’s leadership.
The future of AI is not something that just happens to us. The future of AI happens because of us.
That is the advantage.
Not automation. Augmentation.
Not doomsday prophecy. Vision and purpose.
Not smarter machines alone. A smarter, more human future with AI.
The future of AI does not need more prophets. The future needs leaders people can trust.