Myth 7: Governance Only Needs Light Tweaking

To maintain adaptive organizations, boards and executive leadership teams (ELTs) must strengthen learning, scenario planning, and decision-making to account for continuously evolving human-Ai interaction.

By Al Adamsen  |  Future of Work Advisors

The belief that governance structures only need minor modifications to remain effective is comforting, but it’s also wrong. Most executive and board decision-making models were built for a slower, more linear world. Today’s strategic landscape – shaped by Ai, robotic automation, global talent markets, ESG regulations and standards, geopolitical volatility, information availability, cybersecurity, and shifting stakeholder expectations – demands governance that is more agile, systemic, evidence-based and, above all, human-centric.

Clinging to legacy models and processes isn’t merely inefficient; it’s irresponsible. The risks include compromised business continuity, slower opportunity capture, and missed chances to avoid harm. As the 2025 NACD Board Oversight Survey makes clear, while 62% of boards now devote agenda time to Ai, only 36% have adopted an Ai governance framework, and just 27% have updated committee charters to integrate Ai oversight. The takeaway: many boards are paying lip service to Ai while their governance mechanisms remain underdeveloped and often ill-suited to current realities.

Modern governance is not simply about responding to today’s challenges; it’s about creating organizations that remain adaptable over time. That must include clearly understanding, and acting on, the human-Ai connection.

|  The Trap of the Familiar

It’s understandable that leaders resist wholesale change. When they do act, they often prefer measured, incremental adjustments. Sometimes such adjustments suffice. Other times, they amount to little more than window dressing.

Now, however, when it comes to governance and ELT decision-making, incrementalism falls short. The committees, charters, approval gates, and reporting rhythms that worked five or ten years ago are likely no longer fit for purpose. Ai, in particular, is reshaping the very nature of work – impacting workforce size, skill demands, organizational design, technology requirements and, at times, business models themselves. It is also amplifying the amount of information leaders must consider. Yes, it’s complex. It’s a complex system. So if a leader doesn’t want to step into that complexity – if they just want simple, easy-to-understand concepts – then they’re probably not in the right job.

The reality of this complexity means that boards and ELTs can’t simply bolt Ai topics onto yesterday’s governance structures. They must reassess who’s involved, at what frequency, for what purpose, and with what information. They need a wider, cross-functional, integrated lens – one committed to building enduring value and trust.

Recent analyses reinforce this point: treat Ai as an enterprise-wide governance issue, not an IT project.  The 2025 Deloitte Global Boardroom Program found that nearly one-third of boards still don’t have Ai on their agenda, and among those that do, one in three remain dissatisfied with the time spent on it.  In other words, many boards acknowledge Ai, but few are yet governing it with the depth and frequency required.

|  Beyond Technology: Governing Human–Ai Collaboration

When discussing Ai, one of the most concerning trends is that many boards and ELTs remain narrowly focused on technology, cybersecurity, or potential productivity gains. These conversations are necessary, but they are not sufficient… not even close.

What’s often missing is adequate attention to how humans and Ai will work together, and how individual, team, and organizational effectiveness can be optimized in this new reality. This requires the voices of experts trained in human behavior and social systems – Industrial & Organizational (I/O) psychologists, economists, sociologists, and the like. It also requires boards to demand timely, relevant, and actionable workforce insights – skills, activities, perspectives, ideas, sentiment – not just headcount and employee “engagement”.

As Dr. Solange Charas demonstrates in her work on Human Capital ROI (HCROI), people investments are material to enterprise value and must be treated as such. And as Dr. Beverly Tarulli’s research in Strategic Workforce Planning (SWP) highlights, boards need to explicitly connect strategy with talent, skills, and organizational design – areas where Ai will reshape cost, risk, and agility. Together, Charas’s and Tarulli’s insights point to a new imperative: boards cannot govern their organizations responsibly without also measuring and understanding Ai’s impact on humans, and humans’ impact on Ai.
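To make the materiality point concrete, the arithmetic behind a human capital ROI metric is simple enough to sketch. The example below uses one commonly cited HCROI formulation – back non-people costs out of operating expenses, then compare the remaining margin to total compensation – with purely illustrative figures; the precise definitions in Dr. Charas’s work may differ.

```python
def hcroi(revenue: float, operating_expenses: float, compensation: float) -> float:
    """Human Capital ROI: value returned per dollar invested in people.

    Commonly cited formulation:
        HCROI = (revenue - (operating_expenses - compensation)) / compensation
    Non-people costs are backed out of expenses, and the remaining margin
    is compared to total compensation (pay plus benefits).
    """
    return (revenue - (operating_expenses - compensation)) / compensation

# Illustrative figures only – not drawn from the article or from Dr. Charas's research.
print(hcroi(revenue=500e6, operating_expenses=400e6, compensation=150e6))  # ≈ 1.67
```

Read this way, a ratio above 1.0 means each compensation dollar returns more than it costs; tracked over time alongside Ai investments, it gives boards a financial lens on the human side of the human-Ai equation.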

|  Legacy Governance = Rising Risk

Despite growing recognition of Ai’s transformative potential, most boards and executive teams are still playing catch-up on governance. The structures that once felt sufficient – committee charters, reporting cadences, oversight checklists – now leave critical gaps. Research shows that many organizations are still establishing even basic oversight, exposing themselves to unnecessary risk.

  • Ownership of Ai governance is unclear. According to NACD, nearly 30% of companies report no assigned committee or full-board ownership for Ai, and 44% haven’t placed Ai on any board agenda at all.

  • Technology oversight must widen. NACD’s Blue Ribbon Commission urges boards to “drive both value and trust through technology,” looking beyond generative Ai to the broader tech stack and its operating risks.

  • Risk frameworks exist, but are underused. The NIST Ai Risk Management Framework (Ai RMF 1.0) and ISO/IEC 42001 provide detailed roadmaps for trustworthy Ai oversight, yet adoption remains low.  According to the July 2025 Trustmarque Report:
    • 93% of organizations report using Ai in some capacity.
    • Only 7% have fully embedded governance frameworks.
    • Just 8% have integrated Ai governance into their software development lifecycle (SDLC).
    • Only 18% have continuous monitoring via KPIs, and very few involve legal, ethics, or HR in Ai decisions.

The picture is stark: Ai is ubiquitous, but structured governance is rare. Incremental tweaks won’t close the gap between today’s oversight practices and tomorrow’s needs. This brings us to the deeper reasons why so many governance approaches are falling short.

|  Why “Light Tweaks” Fail Now

It’s tempting to believe that adjusting meeting agendas, adding Ai as a standing item, or commissioning the occasional audit is enough to modernize governance. Yet the very nature of decision-making is being reshaped. Some recognize this. Others don’t. Here’s why modest adjustments won’t work:

  1. New inputs and outputs. Decisions increasingly rely on probabilistic models fed by dynamic data supply chains and third-party data ecosystems. Traditional controls (policy memos, quarterly reviews) aren’t enough. Boards need explicit governance of models, data, vendors, and related processes.

  2. People are, in fact, critical and strategic. Ai is impacting work, roles, skills, learning, workflows, org design, culture, and more. Treating these as downstream HR issues creates execution and reputational risk – and, over time, risks compromising business continuity and competitive advantage.

  3. Speed + complexity. Governance must enable rapid, responsible decision-making through scenario planning and pre-defined decision rights. Annual scenario planning just isn’t enough, and scenarios must account for the ongoing human-Ai interaction.

  4. Trust as an outcome. Oversight must verify the trustworthiness of information, curate the “right” information, seek ideas from those closest to the work, acknowledge the presence of bias, and communicate to stakeholders in frequent, forthright, confidence-inspiring ways.

If traditional approaches no longer suffice, where should boards and ELTs look for inspiration? Fortunately, a few of us who focus on governance, leader decision-making, and workforce planning have already mapped out new ways forward.

|  What Good Governance Looks Like Now

If legacy models create risk and incremental tweaks fall short, what does effective governance look like now? The answer isn’t more bureaucracy or endless oversight layers. Instead, it’s governance redesigned to be adaptive, integrated, and centered on both technology and people. Future-ready governance blends clear mandates, cross-functional accountability, and scenario planning into a framework that can evolve as conditions change.

  1. Clearer mandates. Assign explicit Ai and technology oversight to a board committee, with full-board updates at least quarterly.

  2. Decision rights by design. Use structured models (e.g., AWARE, RAPIDS) to clarify ownership for Ai-impacted decisions (see the illustrative sketch below).

  3. Human-capital integration. Track HCROI, skills velocity, employee experience, relationship equity, and similar measures alongside financials.

  4. Operationalizing trust. Align with NIST Ai RMF and ISO 42001 to create transparent, auditable processes.

  5. Scenario planning cadence. Make futuring and scenario planning quarterly board practices, not annual workshops.

  6. Internal Ai Review boards. Charter cross-functional review boards to govern Ai applications and their impacts on people, process, culture, data security, and financials.

These practices are not theoretical; they’re actionable steps that boards and executive teams can implement now. Yet even the strongest models require practical tools, checkpoints, shared mindsets, and processes to keep leaders aligned.
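To make decision rights by design (item 2 above) tangible, here is a minimal sketch of what a decision-rights register might look like. Every role and decision in it is hypothetical, and the fields loosely follow RAPID-style decision-rights models (recommend, input, veto, decide) rather than reproducing the AWARE or RAPIDS frameworks cited above.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRight:
    """One Ai-impacted decision and who holds which rights over it."""
    decision: str                                        # the decision being governed
    decide: str                                          # single accountable owner
    recommend: list[str] = field(default_factory=list)   # shape the proposal
    input_from: list[str] = field(default_factory=list)  # consulted, no veto
    veto: list[str] = field(default_factory=list)        # can block on risk grounds

# Hypothetical register entry – roles and scope are illustrative only.
REGISTER = [
    DecisionRight(
        decision="Deploy a generative-Ai copilot in customer support",
        decide="Chief Operating Officer",
        recommend=["Head of Customer Experience", "Internal Ai Review Board"],
        input_from=["Legal", "HR", "Security", "Frontline team leads"],
        veto=["Chief Risk Officer"],
    ),
]

def owner_of(decision: str) -> str:
    """Return the single accountable owner, or fail loudly if the decision is unmapped."""
    for right in REGISTER:
        if right.decision == decision:
            return right.decide
    raise KeyError(f"No decision right registered for: {decision!r}")

print(owner_of("Deploy a generative-Ai copilot in customer support"))  # Chief Operating Officer
```

The design point is the single accountable “decide” owner per decision: when an Ai-impacted call has to be made quickly, nobody should be debating who holds the pen, and an unmapped decision fails loudly rather than defaulting silently to whoever happens to be in the room.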

|  Final Takeaway: Adaptability Above All

Governance doesn’t need “light tweaking.” It requires redesign – redesign that treats technology and humans as inseparable drivers of value and trust.

Boards and ELTs that build adaptability into their governance – through clearer mandates, cross-functional integration, human capital metrics, and regular scenario planning – won’t just avoid unnecessary risks. They’ll create and sustain organizations that learn and adapt at speed, at scale, and in sustainable ways – non-negotiables in the age of Ai and ongoing disruption.

To learn more about the other Myths, click here. And to learn how to assess your organization’s adaptive readiness and, in turn, build executive decision-making processes rooted in timely, relevant, and actionable insight, follow and connect with me here on LinkedIn. Finally, be sure to subscribe to the Future of Work Advisors Newsletter.
