Nope. Ai is not overhyped. If anything, it is under-understood and under-planned for. And “hype” isn’t the right word. This is serious. Ai is already reshaping society, economies, and industries more fundamentally than most of us appreciate. It’s affecting personal finances, careers, and livelihoods. It’s also altering organizations, business models, and simply how work gets done.
I don’t pretend to know the full extent of what’s coming, or the timeline. What I do know is that most leaders, and most leadership teams, aren’t approaching the challenges and opportunities with a big enough lens. Too often they’re simply viewing Ai as a tool to improve efficiency and effectiveness, rather than a force that demands systemic redesign.
This is why I appreciate Yuval Noah Harari’s framing of Ai as Alien Intelligence. It casts Ai as a more powerful force to be engaged in an enduring relationship, with respect, thoughtfulness, and nurturing. Instead, many are being held back by inertia, a resistance that drastically elevates risk. Leaders must thus break from that steady state and deliberately reimagine business models, work, and incentives and, just as important, redefine what a workforce is in the age of Ai, AGI, and self-evolving systems.
Just months before 9/11, a friend of mine – a PhD historian like Dr. Harari – shared something profound that has stuck with me to this day: “Al, most Americans don’t really get history. They haven’t lived through it. They’ve had it easy. Wars, depression, systemic oppression… these aren’t real to them. They’re abstract concepts in movies where characters overcome them on the way to a happy ending. But history hasn’t ever worked that way.” Even if I’m not recalling the words exactly, the essence is there.
Carl Sagan offered a similar warning in The Demon-Haunted World (1995):
“I have a foreboding of an America in my children’s or grandchildren’s time when the United States is a service and information economy, when nearly all of the manufacturing industries have slipped away… when awesome technological powers are in the hands of a very few… when the people have lost the ability to set their own agendas or knowledgeably question those in authority… we slide, almost without noticing, back into superstition and darkness… the 30-second sound bites… lowest-common-denominator programming… especially a kind of celebration of ignorance.”
Sagan’s point was stark: a society with extraordinary technology but without scientific literacy, skepticism, or grounding in truth is not just at risk, but on a combustible path. The same is true for organizations.
Here lies arguably the most dangerous myth: the belief that everything will just work itself out. It’s comforting, of course. It absolves leaders of responsibility. It suggests that markets, “the invisible hand”, or the simple passage of time will sort out the hard parts: inequity, unemployment, reskilling, energy strain, social stability.
History tells us otherwise. From sweatshops in the industrial revolution, to mass unemployment in the Great Depression, to financial crises that destroyed jobs and livelihoods, the times when leaders hoped “it will just work out” were the times of greatest human cost and, by extension, greatest organizational and societal cost.
This is the connective tissue across all the myths in this series. Whether the myth is that “Ai will make better decisions for us” (Myth 4), or that “governance structures only need light tweaks” (Myth 7), or that “more information leads to more clarity” (Myth 6), the common danger is complacency. It is the story we tell ourselves to delay the new, uncomfortable, necessary work.
Which raises the questions: What is this new work? And if Ai is supposedly overhyped, what do credible experts and institutions say about its trajectory? And how can leaders best prepare?
If Ai were merely hype, the people building it, and those advising business leaders and policy-makers, would be downplaying the risks. They’re not. Some are ringing bells while others are outright sounding alarms on governance, policies (or lack thereof), infrastructure, scale, and impact.
Taken together, these aren’t voices of complacency; they signal an uncommon, if not unprecedented, level of individual, organizational, and societal risk: risks of social unrest, infrastructure frailty, capital distortions, geopolitical strains, incentive mismatches, and ethical and legal failings. The challenge isn’t whether Ai is appropriately “hyped”. It’s whether it’s being appropriately dealt with.
Forecasts aren’t destinies, but they do help identify what likely lies ahead. Used wisely, they help clarify priorities, illuminate risks, and highlight opportunities. Collectively, they reveal the scale, pace, and equity implications of Ai’s economic arc.
McKinsey: Gen-AI could add $2.6–$4.4T annually globally across 60+ use cases.
Goldman Sachs: AI may lift global GDP by ~7% and raise annual productivity growth by ~1.5 percentage points over a decade.
World Economic Forum (Future of Jobs 2025): By 2030, +170M jobs created, –92M displaced; 39% of skill sets destabilized; 63% of employers cite skills gaps.
IMF: 40% of global jobs exposed to AI; 60% in advanced economies, with substitution pressures depressing wages.
OECD: A third of vacancies are in high-exposure occupations; 60% of workers fear job loss; 40% expect wage cuts.
Lightcast: Non-IT AI postings surged 9× since 2022; 51% of roles now outside IT.
Burning Glass Institute: Many firms promote skills-based hiring, but few implement it, revealing a stark rhetoric-to-reality gap.
Deloitte (2025): Calls for reinvention of learning and development, not just tool adoption, to keep pace with AI shifts.
Revelio Labs (July 2025): June saw a 7% month-over-month decline in active postings. Entry-level demand has fallen 50% since 2022. White-collar postings dropped 12.7% YoY, outpacing blue-collar declines, while executive pay has widened the gap with middle managers and frontline workers.
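To make the scale of these forecasts concrete, here is a back-of-the-envelope tally of the WEF Future of Jobs 2025 figures quoted above. It uses only the numbers cited in this article; the distinction between net change and gross churn is mine, added to show why a positive net figure can still mean massive disruption.

```python
# WEF Future of Jobs 2025 projections for 2030, in millions of jobs,
# as quoted in the forecast list above.
created = 170    # jobs projected to be created
displaced = 92   # jobs projected to be displaced

# Net change is what headlines report; gross churn is the total number
# of roles in motion either way, which better reflects disruption.
net = created - displaced
churn = created + displaced

print(f"Net job change by 2030: +{net}M")   # +78M
print(f"Gross churn by 2030:    {churn}M")  # 262M roles created or displaced
```

The point of the arithmetic: a net gain of 78M jobs still implies 262M roles churning, which is why the same report can simultaneously project job growth and destabilized skill sets.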
Interpret these numbers as you deem appropriate. For me, the message is clear: Ai’s disruption isn’t a myth, and it isn’t overstated. It’s well underway, and the warning lights are blinking. Without proactive design and deliberate leadership, the potential upside can just as easily collapse into strain, inequity, and unrest.
At its core, Ai is an unprecedented disruptor. Alongside its extraordinary upside, it carries profound risks: labor displacement, widening inequality, and potential social unrest. Leaders can no longer afford to treat workforce planning as a back-burner talent-acquisition function. It must now be a strategic imperative – holistic, continuous, and actionable.
The question is no longer if disruption is rippling through organizations and labor markets. It is: how severe will it be, how fast will it accelerate, and how prepared are leaders, organizations, and individuals to meet it?
Some perspectives on the direction and degree of potential unemployment:
These mainstream forecasts, in my view, are conservative. My thinking aligns more with Gawdat, Amodei, Jones, and Tasci: we may be heading toward double-digit unemployment within the next two to three years. And as I noted earlier, for those unfamiliar with history, this is not unprecedented. From the Great Depression to the labor dislocations of industrial automation, sharp employment shocks have always carried heavy social costs.
But unemployment rates only tell part of the story. Rising underemployment and widening gaps between wage growth and cost of living point to deeper structural strain. The warning lights are flashing. What leaders choose to do now will determine whether Ai becomes a force for resilience and renewal, or for fracture and unrest.
If the risks are real and the warning lights are flashing, the question becomes: how should leaders respond? Forecasts show what might happen. Design principles show how to prepare. Throughout this series we’ve emphasized the need for human-centered futures (Myths 1–3), skills ecosystems (Myths 6 & 8), and ecosystems of responsibility (Myth 7). Bringing those threads together, here are six design imperatives for leaders in the age of Ai:
These principles aren’t just about Ai and technology; they’re about consciously managing ecosystems of people, skills, networks, energy, decision-making, and trust. Such ecosystems have been the throughline of this entire myth series, highlighting that when leaders design deliberately and systematically, organizations adapt and flourish. When they don’t, myths take over and inertia thwarts progress.
Ai isn’t overhyped. It’s under-understood and under-designed for. The future of work with Ai will not just “work itself out”. It will either be designed or drifted into. Those who act with conscious design will have a compounding advantage, and those who don’t will accelerate toward irrelevance.
What’s at stake now is larger than quarterly results. It is the blueprint for the next century of work. Just as prior generations of leaders shaped labor rights, globalization, and digital transformation, this generation must redesign work, organizations, and societies for the age of Ai. This responsibility cannot be outsourced to markets, algorithms, or regulators alone. It belongs to executives, boards, policymakers, educators, and individuals... all of us.
As Angela le Mathon recently emphasized: “Leaders must recognize that Ai isn’t just a technical disruption, it’s a human one. If organizations don’t align their strategies with how people actually experience work, they will not only fall behind, they will lose trust, talent, and legitimacy.”
For leaders, the call is clear:
The organizations that treat Ai not as a bolt-on tool but as a systemic design challenge will set the standard for resilience, trust, adaptability, and enduring value. Everyone else will inherit a future they did not choose.
So please, leaders, design your work, your organizations, and your employee experiences deliberately, systemically, and humanely. Doing so will not only benefit your customers and shareholders, it will strengthen the societies and human systems we all share.
To learn more about the other Myths, click here. And to learn how to assess your organization’s adaptive readiness and, in turn, build executive decision-making processes rooted in timely, relevant, and actionable insight, follow and connect with me here on LinkedIn. Finally, be sure to subscribe to the Future of Work Advisors Newsletter.