#32 - Beyond the Fear: Balancing AI Panic with Purpose
Reading The Atlantic at 30,000 feet on a flight to Orlando reminded me: AI isn’t just a tech challenge. It’s a wake-up call for all of us—leaders, parents, humans.
I read The Atlantic's piece, "What Happens When People Don’t Understand How AI Works?", on a Sunday morning flight to Orlando. Link here: https://www.theatlantic.com/culture/archive/2025/06/artificial-intelligence-illiteracy/683021/
I was heading to deliver a keynote to tech leaders about job loss, job change, and the future of work. As I watched people toggle between Teams, Fox News, and CNN, it hit me:
It’s not just AI illiteracy. It’s AI misalignment.
As I often say: "AI isn’t about tech replacing people—it’s about people evolving alongside tech. That’s a mindset shift, not a software upgrade."
We’ve sold machines as minds. We’ve made algorithms sound empathetic. And now that the curtain’s pulled back? People feel confused. Even betrayed.
A piece in the Financial Times called AI a "hideously expensive myna bird": a mimic with no understanding. That’s what happens when hype outpaces education.
The Atlantic went further—sharing examples of emotional harm, like users forming dependencies on AI bots, or AI hallucinations leading to real-world confusion. These aren’t fringe cases—they’re signals. Signals that when people don’t understand what AI is (and isn’t), they project humanity onto a tool. And when it doesn’t act human back, the trust breaks.
We’ve Been Here Before
Every major innovation comes with a panic cycle:
People thought electricity would leak out of walls and kill us.
They thought the internet would ruin conversation.
They resisted CGI until it redefined filmmaking.
We’re seeing the same with AI. It feels scary because we haven’t reframed it yet.
We fear the unknown. But eventually, we adapt. The question is: who leads that adaptation? Because if we leave it to chance, we risk building systems that reinforce our worst habits, not our best intentions.
What Are We Actually Afraid Of?
Let’s call it what it is:
Loss of control
Fear of the unknown
Fear of being exposed
We don’t know what’s around the corner. That’s why I stopped using three-year roadmaps. Instead, I use a daily GPS: a mindset built to flex, not a rigid, monolithic plan.
Planning is still important—but adaptation is critical. We need to build organizations (and mindsets) that can pivot in motion, not wait for certainty. That motion happens in weeks and months, not years.
Where Leaders Are Struggling
Here’s what I see over and over:
Mindset — Thinking AI is a tech upgrade, not a transformation.
Heartset — Forgetting to keep the human at the center of the design. When emotional impact is ignored, trust erodes.
Skillset — Not knowing what skills to build for an AI-integrated world.
Work Design — Still organizing around jobs instead of outcomes.
Toolset — Delegating to IT instead of owning the strategic vision.
Intentionality — No time carved out to learn, explore, or rethink.
This isn’t resistance. It’s fear, dressed up as "busyness."
And in many cases, it’s deeper than fear: it’s identity. If your career has been built on knowledge ownership, what happens when knowledge becomes searchable? Shareable? Generative? Call it FOBO: the Fear of Becoming Obsolete.
That’s not just a job shift. That’s a self shift.
A line I find myself saying a lot lately: "You can’t change work until you change how people think about work. AI just exposes the urgency of that truth."
Let’s Use All the Data
The media often seizes on the negatives—and yes, that frustrates me. I get tired of hearing how AI is going to ruin everything, especially when we’ve barely scratched the surface of what it can improve. We are in the first inning of a long game. Let’s talk about what’s real: AI helping detect cancer earlier, supporting mental health with faster insights, and giving educators more time to teach instead of tracking paperwork. This isn’t just about automation—it’s about augmentation, for real people, in real time.
And it’s not just me saying this. HRD recently shared research suggesting that AI might not impact GDP as dramatically as expected. Economists predict only a modest lift, maybe 1.1% to 1.6% in GDP growth over the next decade. Some see that as underwhelming. I see it as a wake-up call: if we want exponential outcomes, we need intentional use, not passive hope. (And to be clear, an increase in GDP is not a bad thing.)
We can either let those stats discourage us or recognize them for what they are: a reminder that technology alone doesn’t create progress—people do.
We need to stop asking, "Is AI good or bad?" and start asking:
"What will it become in our hands?"
That’s how we lead with purpose—not paranoia.
The Real Question We Should Be Asking
"What can we do to have humans and machines team together to create the best possible outcomes for all of us?"
That’s the question. Not "how do we stop AI?" or "how fast can we adopt it?"
It’s: How do we lead it? How do we infuse it with purpose?
Because the biggest risk isn’t replacement. It’s stasis.
Do nothing, and your org becomes the next Blockbuster. And the world will move on—with or without you.
But if we act with urgency and intention, we have a shot at something remarkable. Not just more productivity—but more possibility.
This Isn’t Just About Work
This mindset shift? It can’t be handled in a Monday morning meeting. It’s systemic.
We need to ask:
How does this change education?
How does it reshape government?
How does it redefine humanity?
And honestly, watching what happened in Los Angeles this weekend, I have doubts. Doubts about whether we're ready. Doubts about whether we want to be.
But I also wake up every day with one thought:
We can build a better world.
AI won’t replace us. But it will force us to confront who we are—and who we want to become.
As I try to infuse into all of my work, whether with clients, in writing, or on podcasts: "The future of work isn’t about keeping up—it’s about catching up with ourselves. With purpose. With what matters."
We can either fight the future or forge it.
How to Talk About AI When the Headlines Are Negative
Here are 5 reframing strategies I’ve been using in conversations with execs, teams, and skeptics:
Use fear as a signal, not a stop sign: Headlines are prompts. Respond with, “You’re right to have questions—so let’s answer them together.”
Anchor to human impact: “AI isn’t about tech. It’s about saving lives, improving care, freeing up time.” Make it personal.
Shift the debate: Don’t argue hype vs. doom. Say: “The tech exists. Now it’s about our readiness and responsibility.”
Tell stories, not specs: Use examples that matter—health, education, equity. Facts inform, but stories inspire.
Make room for healthy optimism: Most people aren’t anti-AI. They’re overwhelmed. Give them space to hope—and a role to play.
The Headline I Hope We Write
"Remember when humans had to do this? What a stupid period in time. And look where we are now—because we led with intentionality and humanity."
That’s the story I want to help write.
Let’s be the generation that didn’t panic and freeze. Let’s be the one that learned, adapted, and acted with both head and heart.
About Jason Averbook
Jason Averbook is a globally recognized thought leader, advisor, and keynote speaker focused on the intersection of AI, human potential, and the future of work. He is the Senior Partner and Global Leader of Digital HR Strategy at Mercer, where he helps the world’s largest organizations reimagine how work gets done — not by implementing technology, but by transforming mindsets, skillsets, and cultures to be truly digital.
Over the last two decades, Jason has advised hundreds of Fortune 1000 companies, co-founded and led Leapgen, authored two books on the evolution of HR and workforce technology, and built a reputation as one of the most forward-thinking voices in the industry. His work challenges leaders to stop seeing digital transformation as an IT project and start embracing it as a human strategy.
Through his Substack, Now to Next, Jason shares honest, provocative, and practical insights on what’s changing in the workplace — from generative AI to skills-based orgs to emotional fluency in leadership. His mission is simple: to help people and organizations move from noise to clarity, from fear to possibility, and from now… to next.