Myth 10: Ai Is Overhyped

By Al Adamsen  |  Future of Work Advisors  |  Aug 25, 2025

Nope. Ai is not overhyped. If anything, it is under-understood and under-planned for. And “hype” isn’t the right word. This is serious. Ai is already reshaping society, economies, and industries more fundamentally than most of us appreciate. It’s affecting personal finances, careers, and livelihoods. It’s also altering organizations, business models, and simply how work gets done.

I don’t pretend to know the full extent of what’s coming, or the timeline. What I do know is that most leaders, and most leadership teams, aren’t approaching the challenges and opportunities with a big enough lens. Too often they’re simply viewing Ai as a tool to improve efficiency and effectiveness, rather than a force that demands systemic redesign.

This is why I appreciate Yuval Noah Harari’s framing of Ai as Alien Intelligence.  It denotes a more powerful force that must be dealt with as an enduring relationship, with respect, thoughtfulness, and nurturing.  Instead, many leaders are held back by inertia, a resistance that drastically elevates risk.  Leaders must thus break from that steady state and deliberately reimagine business models, work, and incentives and, just as important, redefine what a workforce is in the age of Ai, AGI, and self-evolving systems.

Just months before 9/11, a friend of mine – a PhD historian like Dr. Harari – shared something profound that has stuck with me to this day: “Al, most Americans don’t really get history. They haven’t lived through it. They’ve had it easy. Wars, depression, systemic oppression… these aren’t real to them. They’re abstract concepts in movies where characters overcome them on the way to a happy ending. But history hasn’t ever worked that way.” Even if I’m not recalling the words exactly, the essence is there.

Carl Sagan offered a similar warning in The Demon-Haunted World (1995):

“I have a foreboding of an America in my children’s or grandchildren’s time when the United States is a service and information economy, when nearly all of the manufacturing industries have slipped away… when awesome technological powers are in the hands of a very few… when the people have lost the ability to set their own agendas or knowledgeably question those in authority… we slide, almost without noticing, back into superstition and darkness… the 30-second sound bites… lowest-common-denominator programming… especially a kind of celebration of ignorance.”

Sagan’s point was stark: a society with extraordinary technology but without scientific literacy, skepticism, or grounding in truth is not just at risk, but on a combustible path.  The same is true for organizations.

 

The Hidden Sub-Myth: “Everything Will Just Work Itself Out”

Here lies arguably the most dangerous myth: the belief that everything will just work itself out. It’s comforting, of course. It absolves leaders of responsibility. It suggests that markets, “the invisible hand”, or the simple passage of time will sort out the hard parts: inequity, unemployment, reskilling, energy strain, social stability.

History tells us otherwise. From sweatshops in the Industrial Revolution, to mass unemployment in the Great Depression, to financial crises that destroyed jobs and livelihoods, the times when leaders hoped “it will just work out” were the times of greatest human cost and, relatedly, greatest organizational and societal cost.

This is the connective tissue across all the myths in this series. Whether the myth is that “Ai will make better decisions for us” (Myth 4), or that “governance structures only need light tweaks” (Myth 7), or that “more information leads to more clarity” (Myth 6), the common danger is complacency. It is the story we tell ourselves to delay the new, uncomfortable, necessary work.

Which raises the questions: What is this new work?  And if Ai is supposedly overhyped, what do credible experts and institutions say about its trajectory? And how can leaders best prepare?

 

What the Ai Experts Are Saying

If Ai were merely hype, the people building it, and those advising business leaders and policy-makers, would be downplaying the risks. They’re not.  Some are ringing bells while others are outright sounding alarms on governance, policies (or lack thereof), infrastructure, scale, and impact.

  • Geoffrey Hinton (“Godfather of Ai”): In an April 2025 interview, Hinton shortened his AGI (Artificial General Intelligence) timeline to between 4 and 19 years, warning that Ai agents capable of acting in the world are more perilous than those limited to answering questions. He also recently stated there's a 10–20% chance that Ai could enable human extinction within the next three decades.

  • Meredith Whittaker (Signal Foundation): Has consistently critiqued Ai hype as a facade masking entrenched surveillance-capitalist incentives. She calls for democratic and equitable tech futures and warns that unchecked tech power risks reinforcing structural inequities.

  • Jensen Huang (NVIDIA): At London Tech Week this past June, Huang emphasized that “programming Ai is like programming a person”, and the new “human” language of Ai makes everyone a programmer. He called Ai the “great equalizer”, noting that anyone can now build with it, even without coding expertise. He added: “If you’re not using Ai, you’re going to lose your job to someone who does.”

  • Sam Altman (OpenAi CEO): Recently highlighted how Ai agents will start joining the workforce this year (2025), fundamentally reshaping how companies operate. He reassured that while new jobs will emerge, and some will vanish, the younger generation, Gen Z, is well-positioned to thrive. His real concern lies with older professionals resisting reskilling.

  • Eric Schmidt (former Google CEO): Testifying before Congress this past April, Schmidt warned that Ai’s electricity demand could reach 99% of total generation, equating to an additional 90 gigawatts over the next 3–5 years, or as much as all U.S. nuclear capacity combined, unless the grid is modernized at pace.

  • Mo Gawdat (ex–Google Executive): On the Diary of a CEO podcast, he warned of a looming “short‑term dystopia” by 2027, driven by mass unemployment, social unrest, and collapse of the middle class if organizations and governments fail to redesign work systems. He dismissed the idea that “Ai creating jobs” is anything but a myth, and cautioned that roles from podcasters to CEOs are vulnerable to full automation.

Taken together, these aren’t voices of complacency; they signal an uncommon, if not unprecedented, level of individual, organizational, and societal risk: risks of social unrest, infrastructure frailty, capital distortions, geopolitical strains, incentive mismatches, and ethical and legal failings. The challenge isn’t whether Ai is appropriately “hyped”.  It’s whether it’s being dealt with appropriately.

 

What Respected Forecasts Suggest

Forecasts aren’t destinies, but they do help identify what likely lies ahead. Used wisely, they help clarify priorities, illuminate risks, and highlight opportunities. Collectively, they reveal the scale, pace, and equity implications of Ai’s economic arc. 

  • McKinsey: Gen-AI could add $2.6–$4.4T annually to the global economy across 60+ use cases.

  • Goldman Sachs: AI may lift global GDP by ~7% and raise productivity by 1.5 percentage points over a decade.

  • World Economic Forum (Future of Jobs 2025): By 2030, +170M jobs created, –92M displaced; 39% of skill sets destabilized; 63% of employers cite skills gaps.

  • IMF: 40% of global jobs exposed to AI; 60% in advanced economies, with substitution pressures depressing wages.

  • OECD: A third of vacancies are in high-exposure occupations; 60% of workers fear job loss; 40% expect wage cuts.

  • Lightcast: Non-IT AI postings surged 9× since 2022; 51% of roles now outside IT.

  • Burning Glass Institute: Many firms promote skills-based hiring, but few implement it, revealing a stark rhetoric-to-reality gap.

  • Deloitte (2025): Calls for reinvention of learning and development, not just tool adoption, to keep pace with AI shifts.

  • Revelio Labs (July 2025): June saw a 7% month-over-month decline in active postings. Entry-level demand has fallen 50% since 2022. White-collar postings dropped 12.7% YoY, outpacing blue-collar declines, while executive pay has widened the gap with middle managers and frontline workers.

Interpret these numbers as you deem appropriate.  For me, the message is clear: Ai’s disruption isn’t a myth, nor is it overstated.  It’s well underway, and the warning lights are blinking.  Without proactive design and deliberate leadership, the potential upside can just as easily collapse into strain, inequity, and unrest.

 

The Potential for Strain & Unrest

At its core, Ai is an unprecedented disruptor. Alongside its extraordinary upside, it carries profound risks: labor displacement, widening inequality, and potential social unrest. Leaders can no longer afford to treat workforce planning as a back-burner talent acquisition function. It must now be a strategic imperative – holistic, continuous, and actionable.

The question is no longer if disruption is rippling through organizations and labor markets. It is: how severe will it be, how fast will it accelerate, and how prepared are leaders, organizations, and individuals to meet it?

Some perspectives on the direction and degree of potential unemployment:

  • Dario Amodei (Anthropic CEO): Predicts Ai could eliminate 50% of entry-level white-collar jobs, spiking unemployment to 10–20% within 1–5 years, unless planned for.

  • Paul Tudor Jones (risk manager): Echoed that prediction, noting white-collar graduate unemployment is already at 5.8% and could escalate to 10–20% soon.

  • JPMorgan’s Murat Tasci: Warns of a “jobless recovery” hitting knowledge workers hard, representing ~45% of U.S. households.

  • Diane Swonk (KPMG): The U.S. is in a “messy, disruptive transition.” Ai is the only major offset to slowing growth, but its productivity benefits are delayed. This lag means leaders face turbulence now, before Ai’s upside is fully realized.

  • Mainstream agencies:

    • CBO: Foresees unemployment rising into 2026.

    • Conference Board: Predicts ~4.5% jobless rate by 2026.

    • University of Michigan RSQE: Projects ~4.9% mid-2026.

    • Deloitte US Outlook: Around 4.6% for 2026.

These mainstream forecasts, in my view, are conservative. Personally, my thinking aligns more with Gawdat, Amodei, Jones, and Tasci: we may be heading toward double-digit unemployment within the next two to three years. And as I noted earlier, for those unfamiliar with history, this is not unprecedented. From the Great Depression to the labor dislocations of industrial automation, sharp employment shocks have always carried heavy social costs.

But unemployment rates only tell part of the story. Rising underemployment and widening gaps between wage growth and cost of living point to deeper structural strain. The warning lights are flashing. What leaders choose to do now will determine whether Ai becomes a force for resilience and renewal, or for fracture and unrest.

 

Design Principles for the Future of Work

If the risks are real and the warning lights are flashing, the question becomes: how should leaders respond? Forecasts show what might happen. Design principles show how to prepare. Throughout this series we’ve emphasized the need for human-centered futures (Myths 1–3), skills ecosystems (Myths 6 & 8), and ecosystems of responsibility (Myth 7). Bringing those threads together, here are six design imperatives for leaders in the age of Ai:

  1. Work as a product.
    Treat work the way product teams treat software: as something that is continually versioned, tested, and improved. Stand up a Work Design Guild to map tasks across humans, Ai, data, and automation, then iterate as strategy and technology evolve.


  2. Skills as substrate.
    Skills, not job titles, are the true unit of value. Build a living skills graph tied to task catalogs so individuals and organizations can adapt in real time. This keeps people employable, leaders informed, and organizations resilient.


  3. Adoption as transformation.
    Ai isn’t a plug-in. It reshapes processes, decision rights, responsibilities, and work itself. Treat Ai adoption as an enterprise-level change program: redesign workflows, retool data governance, and invest in learning and leadership, not just in licenses.


  4. Co-creation as trust.
    The fastest way to erode trust is to “drop” Ai on employees. The fastest way to build it is to involve them. Invite teams into task audits, workflow design, guardrail creation, and scenario planning. This accelerates adoption, strengthens capability, and signals respect.


  5. Infrastructure as strategy.
    Ai runs on energy, compute, and latency. These aren’t back-office concerns, they’re strategic variables. Leaders must align their Ai roadmaps with energy availability, grid capacity, sustainability goals, and resilience planning.


  6. Governance as architecture.
    Governance isn’t about slowing Ai down, it’s about building decision-making processes that mitigate risk and increase the probability that investments reap the desired return. Stand up an Ai or Work Design Review Board spanning operations, IT, IS, HR, legal, inclusion, and risk. Measure not just ROI but also equity, safety, bias, and workforce and business impacts.
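
To make the “skills as substrate” idea tangible, here is a minimal sketch, in Python, of a skills graph tied to a task catalog. Everything in it, the task names, skills, and people, is hypothetical and illustrative; a real implementation would draw on your own task taxonomy and workforce data.

```python
# Hypothetical task catalog: each task maps to the skills it requires
# and whether it is a candidate for Ai assistance or automation.
TASK_CATALOG = {
    "draft_quarterly_report": {"skills": {"writing", "data_analysis"}, "ai_assistable": True},
    "negotiate_vendor_contract": {"skills": {"negotiation", "legal_literacy"}, "ai_assistable": False},
    "triage_support_tickets": {"skills": {"product_knowledge"}, "ai_assistable": True},
}

# Hypothetical workforce skills graph: person -> skills currently held.
WORKFORCE = {
    "avery": {"writing", "negotiation"},
    "blake": {"data_analysis", "product_knowledge"},
}

def coverable_tasks(person_skills, catalog):
    """Tasks a person can fully cover with their current skills."""
    return [task for task, meta in catalog.items() if meta["skills"] <= person_skills]

def org_skill_gaps(workforce, catalog):
    """Skills required somewhere in the catalog that no one currently holds."""
    required = set().union(*(meta["skills"] for meta in catalog.values()))
    held = set().union(*workforce.values())
    return required - held

print(coverable_tasks(WORKFORCE["blake"], TASK_CATALOG))  # ['triage_support_tickets']
print(org_skill_gaps(WORKFORCE, TASK_CATALOG))            # {'legal_literacy'}
```

Even a toy model like this makes gaps visible: it shows which tasks each person can fully cover today, and which required skills no one in the workforce currently holds. That is the kind of real-time signal the skills-graph principle calls for, and the same structure extends naturally to tracking which tasks are candidates for Ai assistance.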

These principles aren’t just about Ai and technology, they’re about consciously managing ecosystems of people, skills, networks, energy, decision-making, and trust. Such ecosystems have been the throughline of this entire myth series, highlighting that when leaders design deliberately and systematically, organizations adapt and flourish. When they don’t, myths take over and inertia thwarts progress.

 

Conclusion

Ai isn’t overhyped. It’s under-understood and under-designed for. The future of work with Ai will not just “work itself out”.  It will be designed, or it will be drifted into. Those who act with conscious design will have a compounding advantage, and those who don’t will accelerate toward irrelevance.

What’s at stake now is larger than quarterly results. It is the blueprint for the next century of work. Just as prior generations of leaders shaped labor rights, globalization, and digital transformation, this generation must redesign work, organizations, and societies for the age of Ai. This responsibility cannot be outsourced to markets, algorithms, or regulators alone. It belongs to executives, boards, policymakers, educators, and individuals... all of us. 

As Angela le Mathon recently emphasized: “Leaders must recognize that Ai isn’t just a technical disruption, it’s a human one. If organizations don’t align their strategies with how people actually experience work, they will not only fall behind, they will lose trust, talent, and legitimacy.”

For leaders, the call is clear:

  • Design/Redesign work. Don’t assume the human–Ai connection; measure and manage it.

  • Co-create with people. Don’t dictate. Invite employees into shaping the new.

  • Measure what matters. Go beyond legacy metrics to reflect what’s actually happening.

  • Balance outcomes with process. Focus not just on results, but on how those results are achieved.

The organizations that treat Ai not as a bolt-on tool but as a systemic design challenge will set the standard for resilience, trust, adaptability, and enduring value. Everyone else will inherit a future they did not choose.

So please, leaders: design your work, your organizations, and your employee experiences deliberately, systemically, and humanely. Doing so will not only benefit your customers and shareholders, it will strengthen the societies and human systems we all share.

 

 

To learn more about the other Myths, click here.  And to learn how to assess your organization’s adaptive readiness and, in turn, build executive decision-making processes rooted in timely, relevant, and actionable insight, follow and connect with me here on LinkedIn.  Finally, be sure to subscribe to the Future of Work Advisors Newsletter.