Myth 8: More Ai and Tech = More Capacity and Productivity

Ai doesn’t automatically create capacity and productivity; it often adds work, fuels burnout, and drains focus. Leaders must therefore redesign work, systems, and processes to see positive gains.


By Al Adamsen  |  Future of Work Advisors

Reality Check

If I’m known for anything within my personal circles, including as a father, it’s capacity: temporal, emotional, and cognitive. What can I handle? What can we handle? I’ve long been committed in my own life to creating and maintaining elasticity – the ability to stretch and move like a rubber band throughout the day. If there’s no capacity, if the band is dry and tight, it’ll snap. No bueno.

This is what I’m seeing now with the rise of burnout, troubling stories of Ai use, and the stubborn myth that Ai (and tech in general) will almost automatically create more capacity and productivity. Recent studies confirm the concern: according to Forbes, job burnout hit 66% in 2025, with Ai-related pressures like unrealistic productivity expectations making things worse. An Upwork Research Institute survey found 77% of employees say Ai has actually increased their workload, not reduced it. Add to this the impact of workforce reductions (RIFs), where remaining employees are expected to absorb more work under the assumption that Ai will pick up the slack, and the picture becomes clearer: Ai isn’t magically creating capacity. It’s often stretching human elasticity – our rubber bands – to the breaking point.

So let’s quiet the hype and be real, look-at-the-data real. Ai and new tools can certainly lift productivity, but only when their application is focused and respects the limits of our brains, business processes, and team cultures. Ai and tech are not magic wands.

Yes, Ai can streamline repetitive tasks and, in many cases, create things faster and better than ever before. But it also creates new work: training models, crafting prompts, building workflows, verifying outputs, fixing handoffs, reconciling data, and redesigning processes. Add fragmented tool stacks and “always-on” communication, and many leaders discover the opposite of what was promised: not more capacity and productivity, but less.

|   The Research: What’s Really Happening

Collaborative Overload & Microstress

Rob Cross and colleagues have shown that the proliferation of platforms, requests, and digital touchpoints often consumes more capacity than it frees. Add Ai – another relationship that involves periodic two-way communication – and hoped-for gains can quickly vanish.

Cross’s collaboration with Karen Dillon in The Microstress Effect makes this even clearer. They show how tiny, frequent, and compounding demands – like messages or Ai outputs that require checking, re-checking, or urgent responses – chip away at performance. Over time, these “microstresses” deplete emotional reserves, drain cognitive capacity, and weaken human-to-human relationships that provide meaning, energy, and resilience.

Context Switching & Attention Residue

Gloria Mark’s research at UC Irvine demonstrates that each time workers shift between tasks, apps, or channels, attention fragments. Recovery takes time, stress rises, and error rates climb. Instead of expanding bandwidth, too many tools and transitions shrink it.

Building on this, Sophie Leroy at the University of Washington coined the term “attention residue.” Her research shows that when people leave one task unfinished and jump to another, part of their attention stays tethered to the first. The result: lower performance, reduced creativity, and slower uptake on the new task. In short: every notification leaves behind cognitive “shrapnel” that compromises what comes next.

Gray Work & Digital Debt

The net effect is what Microsoft’s Work Trend Index calls digital debt – a backlog of messages, meetings, and apps that steals attention from meaningful work. Ai can certainly reduce some forms of labor, but it often creates new layers of “gray work” that leaders rarely account for. Productivity, therefore, isn’t automatic. It requires deliberate design to ensure technology lightens the load rather than amplifying it.

|   Why the Promise and the Pain Co-Exist

Despite heavy investment, surveys and telemetry show that many organizations still struggle to turn Ai adoption into measurable impact. Why?

Cognitive load is real... and amplified.
Interruptions cost focus and quality. Humans aren’t CPUs; we don’t multitask seamlessly. Ai often replaces one form of work with another, less visible form, and with burnout levels already surging, each new layer of cognitive demand accelerates the slide.

Ai both frees and creates work.
Generative Ai offloads drafting and synthesis, but introduces “Ai-managerial labor”: specifying context, verifying outputs, stitching results across systems, monitoring edge cases, etc. Studies show knowledge workers now spend more of their critical-thinking energy on verification and “task stewardship” than on creation. Employees report that when these demands pile up without added support, burnout is the result.

Algorithms add friction when autonomy is lost.
Algorithmic management may optimize workflows, but research links it to reduced autonomy, higher burnout, and lower discretionary effort.

Networks matter more than tools.
Kristin Cullen-Lester and Greg Pryor in The Social Capital Imperative show that productivity depends on connections – who shares knowledge, where informal bridges exist, and how structural gaps are filled. Without activating social capital, Ai fractures teams instead of fueling them.

Productivity is social, not just technical.
Michael Arena’s Adaptive Space reminds us that value flows through networks. If Ai is bolted onto rigid structures, innovation stalls. Connect entrepreneurial pockets to execution networks, and tools amplify people instead of replacing them.

Angst is a signal, not a soft topic.
Drs. Erin Eatough and Shonna Waters’ research on workplace angst – feelings of insecurity, stagnation, and irrelevance – shows that many employees experience psychological threat as Ai reshapes roles. That threat response drains energy unless leaders redesign with human needs in mind or, better yet, in partnership with those humans.

The human edge must be intentional.
Tomas Chamorro-Premuzic argues that the real upside comes when Ai augments distinctly human strengths – judgment, empathy, and creativity – and when leaders redesign jobs and culture to use the time Ai gives back intentionally, instead of letting it be reabsorbed by other lower-value tasks.

|   What To Do: A Leadership Playbook

  1. Measure how work gets done.
    Don’t stop at output metrics. Track tasks, activities, interruptions, tool overlap, time spent verifying Ai outputs – whatever’s appropriate for that specific job family. Treat these as first-class KPIs alongside throughput and quality.


  2. Simplify before you add.
    Rationalize apps and channels. Every new bot or tool must retire something or streamline a process. No net-adds that will compromise capacity.


  3. Budget for Ai-managerial labor.
    Make verification, prompt design, and integration explicit roles with time, training, and ownership – not hidden after-hours “glue work,” which is artificial, unsustainable, and, in my view, unethical.


  4. Protect autonomy and discretion.
    Algorithmic nudges may optimize workflows, but without transparency and human override they fuel burnout and reduce trust. Build choice into the system.


  5. Redesign networks, not just tasks.
    Create “adaptive space” by connecting explorers (who experiment with Ai), brokers (who translate insights), and executors (who scale them). Sponsor cross-team forums, hackathons, and rotations to move learning into the operating core.


  6. Lead the change like leaders.
    McKinsey’s latest Ai survey shows CEO-level oversight correlates with measurable impact. Treat Ai as an operating-model transformation, not just another software rollout.


  7. Design for whole humans.
    Capacity isn’t infinite. Guard it with meeting budgets, no-ping hours, and effort budgets. Recognize that sleep, health, relationships, and community are not “nice-to-haves” – they’re non-negotiable constraints on performance that need time, space, and nurturing.


  8. Co-create work redesign with employees.
    Don’t do Ai adoption to employees. Invite them into the redesign process so they can shape workflows, share ideas, voice concerns, and surface innovative uses of Ai. Co-creation not only improves adoption, it turns angst into agency, and helps prevent burnout.


  9. Build Ai literacy... or, better yet, fluency.
    Invest in broad Ai fluency so employees don’t just use the tools, but intentionally engage with them. Fluency means knowing both how to use Ai effectively and how to question its risks and limitations.


  10. Govern for trust.
    Move beyond compliance. Ensure governance frameworks make ethics, transparency, and fairness central to Ai use. Trust is the foundation for sustained adoption and, in turn, the hoped-for capacity and productivity gains.
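
To make step 1 of the playbook concrete, here is a minimal, hypothetical sketch of how a team might compute capacity KPIs from a simple activity log. The event names, minute values, and metric definitions are illustrative assumptions for one job family, not a prescribed schema or tool:

```python
from collections import Counter

# Hypothetical activity log for one workday: (event_type, minutes).
# Event names are illustrative assumptions, not a standard taxonomy.
log = [
    ("deep_work", 90),
    ("verify_ai_output", 20),
    ("interruption", 5),
    ("deep_work", 45),
    ("verify_ai_output", 15),
    ("interruption", 5),
    ("meeting", 60),
]

def capacity_kpis(log):
    """Summarize where time goes and how often focus is broken."""
    minutes = Counter()
    for event, mins in log:
        minutes[event] += mins
    total = sum(minutes.values())
    return {
        # Share of the day spent checking Ai outputs ("Ai-managerial labor").
        "verification_share": round(minutes["verify_ai_output"] / total, 2),
        # How many times focus was broken (context switching / microstress).
        "interruption_count": sum(1 for e, _ in log if e == "interruption"),
        # Share of the day spent in uninterrupted, meaningful work.
        "deep_work_share": round(minutes["deep_work"] / total, 2),
    }

print(capacity_kpis(log))
```

The point of a sketch like this isn’t the code itself; it’s that verification time and interruption counts become visible, first-class numbers sitting alongside throughput and quality, so leaders can see where Ai is consuming capacity rather than creating it.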


|   Bottom Line

Ai doesn’t automatically “free up” time; it redistributes how time is used. Without conscious redesign of work, work processes, and supporting networks, the hoped-for capacity gains will be absorbed into verification, coordination, context switching, and other Ai-induced tasks. The real risk: overestimating short-term efficiency gains (which is what I’m seeing now) while underestimating, and underplanning for, the medium-term, seismic disruption that’ll really start to hit organizations in the fall of 2026, if not sooner.

The leaders, and leadership teams, who successfully navigate their organizations through these disruptive times will do much more than adopt Ai: they’ll measure how work gets done, simplify systems, budget for the hidden costs of Ai-managerial labor, and never assume Ai will magically create capacity and productivity. They’ll protect autonomy, redesign networks, and have a cross-functional team lead Ai adoption, not as an IT project, but as a means to continually improve how their organization gets work done. They’ll recognize that humans, while amazingly resilient, have finite capacities that must be safeguarded. These same humans must also be fully fluent with Ai, co-creators of new ways of working, and protectors of their own capacity and elasticity.

Finally, leaders must regard the human constraints reflected in high angst and burnout not as a nuisance, but as urgent signals that need to be systematically and persistently addressed. With job burnout hitting 66% in 2025 and 77% of employees saying Ai has increased workload, leaders who ignore these signals will see them fracture their organization’s culture, performance, and brand. Those who embrace them as an operating-model design challenge will transform Ai from a stress inducer into the capacity and productivity unlock it’s meant to be.
To learn more about the other Myths click here.  And to learn how to assess your organization’s adaptive readiness and, in turn, build executive decision-making processes rooted in timely, relevant, and actionable insight, follow and connect with me here on LinkedIn.  Finally, be sure to subscribe to the Future of Work Advisors Newsletter.
