By Al Adamsen | Future of Work Advisors
| Myth 4: AI Will Make Better Decisions for Us
If AI gets it wrong, who’s accountable: you or the AI? One of the most persistent myths among leadership teams today is this: if AI is synthesizing vast amounts of information, finding patterns, producing insights, measuring and managing workflows, and making recommendations, it must also be helping us make better decisions.
This is an understandable conclusion or, more likely, an understandable hope. The amount of data is overwhelming, time is scarce, and we have to keep moving, so trusting a confident, assertive, and often affirming AI is enticing. But if we do this, and do it regularly, who’s really in charge?
At a lunch recently, an accomplished CHRO shared that she felt she was now collaborating, and sometimes competing, with another executive, one that’s really smart yet often lacks the historical context, as well as the context of the market and the company’s strategic direction. That AI, that “AI executive” if you will, certainly had the attention of many of her peers, and it certainly produced insights and ideas at a superhuman pace, yet what to do with the information it was producing soon became a challenge. Should the AI’s perspective outweigh hers, particularly as it relates to the workforce, culture, and HR processes? Should the AI’s insights and ideas outweigh the perspectives and ideas of the other ELT members? After all, it's "smarter", right?
| Lead AI, Don’t Be Led by It
You can follow AI’s lead and be led, or you can lead the process and lead AI. That’s the choice. In other words, ensure your strategic decision-making processes, especially those involving scenario planning for prospective futures, remain human-centric.
Why? Because here’s the truth: better information does not guarantee better decisions. Good information – by definition, information that's timely, relevant, accurate, and actionable – still takes time and effort to produce, even when augmented by AI. It also takes time and effort to consider: context-setting, relevance, gaps, leverage points, and so on. Yes, AI can certainly uncover patterns, surface insights, and reduce cognitive load at unprecedented speed, yet this doesn't eliminate the need for human judgment, especially when decisions impact strategy, long-term organizational performance and, of course, people (employees, customers, partners, etc.). If anything, it makes human judgment even more valuable.

| The Illusion of Clarity
As a leader, you’ve undoubtedly been there too: countless dashboards, too many reports, too many “insights.” Now AI has joined the mix, delivering even more information at a pace our human brains can’t possibly absorb. AI isn’t solving information overload; it’s turbocharging it.
This was true 20+ years ago, and it was a key theme in many books, including Nate Silver’s The Signal and the Noise. Now as then, leaders don’t lack information; they’re drowning in it, all the more so because access is so much faster and the output often appears so much more credible (even when it’s not). The result? Cognitive overload, collaborative friction, and poor decisions made under the illusion of clarity and confidence. Unfortunately, that “clarity” is often just noise in disguise. Sound familiar?
| Decision-Making Is a Process, Not an Event
Think about the last major decision you and your leadership team made as a group.
- How confident were you in the information that informed the decision?
- Did you feel you had a good understanding of how the various scenarios would likely play out?
- Was the decision made in a rush without careful consideration of the relevant information, as well as the perspectives and ideas of fellow leadership team members?
- Or, was the decision part of an ongoing process that offered the ample time and space to consider relevant information as well as input from your fellow leaders?
Hopefully the latter, yet as I continue with the Future of Work Project and talk with executives, all agree, without exception, that there’s room for improvement… often a lot of improvement. This is especially true given the pace and magnitude of change, as well as the need to better curate AI-generated insights and ideas, especially those that relate to the workforce, culture, and the very nature of work itself.
Given this, decisions, particularly key strategic decisions, need to be part of an ongoing, well-understood, well-respected, cross-functional process. While this has always been a leading practice, it’s now all but essential. Who’s in “the room”? What information is being consumed? At what frequency? For what purpose? Who’s accountable? How are debates managed? What mental models or frameworks are used to align the team? All of these factors influence how information is consumed and considered and, in turn, the quality of the decisions.
AI can assist in many of these steps, but it cannot yet replicate the subtle dynamics of trust, psychological safety, ethical discernment, or political navigation that shape real-world decisions; nor should it. As my colleagues and I frequently emphasize, and I'll do so again here: in an age of AI, human judgment becomes even more valuable, not less.
| Better Decision-Making Systems, Not Just Smarter Tools
So what’s the opportunity? Stop assuming that more AI equals better decisions. Instead, design systems and processes where AI augments human judgment or, if you prefer, where human judgment augments AI.
No matter the direction of the human-AI relationship, leaders can start creating a better decision-making system by adhering to these five principles:
- Design an ongoing, cross-functional decision-making process with a clear purpose, clear roles and responsibilities, and a clear plan for consuming and considering information.
- Create and hold safe spaces for diverse perspectives and ideas to be considered and explored. Steer clear of oversimplification and quick takes.
- Surface cognitive biases and persistent myths, and actively counterbalance them with timely, relevant, and actionable information.
- Develop shared frameworks and mental models that clarify when to involve AI, when to trust it, when to question it, and when to slow down to simply learn and explore.
- Prioritize human-tech integration in decision-making. Let AI do the pattern recognition and analysis, while humans focus on collaborative sense-making, alignment, and leading the organization through change.
And yes, this means creating the time and space to think about, design, and implement the leadership team’s decision-making process itself: to consider scenarios, imagine implications, and build alignment. In short, to lead.
[Image: The AWARE Framework]
| The AWARE Framework: Leading with Sound Judgment
In line with the five principles, if better decisions depend on a better process, then here’s a start, the AWARE Framework: Assess – Weigh – Act – Reflect – Evolve.
At its core, AWARE is not a checklist but an iterative process meant to consciously create and refine strategic decisions over time:
- Assess – the current state, future state, and gaps. Compile relevant information, apply the appropriate context, identify the risks, leverage points, timeline, etc.
- Weigh – explore how to get from where you are to where you want to go. Weigh the options. Run scenarios. Make decisions.
- Act – formulate a plan on how to enact the decisions. Ensure measures are in place to learn and adjust along the way. Get going. Learn. Iterate.
- Reflect – dispassionately observe what’s occurred. Learn and align as a group. Reimagine a better way forward. Keep what’s working. Change what’s not.
- Evolve – implement the necessary changes, and celebrate, leverage, and expand what’s working. Balance change with stability and predictability.
This process gives leadership teams a predictable rhythm: clarity on when to slow down for deep thinking, and when to speed up to address imminent threats or seize fleeting opportunities. Such a process creates a healthy learning system that mitigates the risk of false confidence and helps ensure AI serves as a tool for human judgment rather than a substitute for it.
Seen this way, the work of leading with AI is not about chasing the smartest algorithm, but about cultivating the most adaptable learning systems, those that keep human wisdom at the center. I’ll reference it again:
“We don’t rise to the level of our potential, we fall to the level of our systems.” - James Clear, author of Atomic Habits
| The Bottom Line
AI can help leaders make better decisions, but it doesn’t guarantee them. It’s a collaborator, not a substitute. Your relationship to this collaborator, both as an individual and as part of a leadership team, needs to be well understood. Boundaries need to be set, as does clarity on when and how to involve it.
Again, in an age of AI, human judgment is not less important; it’s more important than ever. That’s not me just advocating for leaders. It’s me advocating for the organization and its stakeholders, especially workers.
So, going back to the start: the next time AI hands you an “answer,” will you accept it and make it “your” decision, or will you lead it? The leaders who consistently stay AWARE – Assess, Weigh, Act, Reflect, and Evolve – will not only make better decisions, they’ll build the kind of adaptive, human-centered systems that thrive in the age of AI and ongoing disruption.
To learn more about the other Myths, click here. And to learn how to assess your organization’s adaptive readiness and, in turn, build executive decision-making processes rooted in timely, relevant, and actionable insight, follow and connect with me here on LinkedIn. Finally, be sure to subscribe to the Future of Work Advisors Newsletter.