If Ai gets it wrong, who’s accountable, you or the Ai? One of the most persistent myths among leadership teams today is this: if Ai is synthesizing vast amounts of information, finding patterns, producing insights, measuring and managing workflows, and making recommendations, it must also be helping us make better decisions.
This is an understandable conclusion or, more likely, an understandable hope. The amount of data is overwhelming, time is scarce, and we have to keep moving, so trusting a confident, assertive Ai is enticing. But if we do, who's really in charge?
At a lunch recently, an accomplished CHRO shared that she now felt she was collaborating, and sometimes competing, with another executive, one that was remarkably smart yet often lacked historical context and an understanding of the company's market and strategic direction. The Ai-executive certainly had the "ear" of many of her peers, and it certainly produced insights and ideas at a superhuman pace, yet what to do with those insights and ideas had become the challenge. Should the Ai's input outweigh her perspective and ideas, particularly as they related to the workforce, culture, and HR processes? Should the Ai's insights and ideas outweigh the perspectives and ideas of the other members of the executive leadership team?
You can follow Ai’s lead and be led, or you can lead the process and lead Ai. That’s the choice. In other words, ensure your strategic decision-making processes, especially those that are scenario planning prospective futures, remain human-centric.
Why? Because here's the truth: better information does not guarantee better decisions. Good information, by definition information that's timely, relevant, accurate, and actionable, still takes time and effort to produce, even when augmented by Ai. It also takes time and effort to consider: context-setting, relevance, gaps, leverage points, and so on. Yes, Ai can certainly help uncover patterns, surface insights, and reduce cognitive load at unprecedented speed, yet this doesn't eliminate the need for human judgment, especially when decisions impact people (employees, customers, partners, etc.), strategy, and long-term organizational performance. If anything, it makes human judgment more valuable.
As a leader, you’ve undoubtedly been there too: too many dashboards, too many reports, too many “insights.” And now Ai has joined the mix, delivering even more at a pace our human brains can’t possibly absorb. Ai isn’t solving information overload, it’s turbocharging it.
This was true 20+ years ago, and it was a key theme in many books, including Nate Silver's The Signal and the Noise. Leaders don't lack data, they're drowning in it; and this is all the more true because access to it is so much faster and, oftentimes, so much more credible-sounding and credible-looking. The result? Cognitive overload, collaborative friction, and poor decisions made under the illusion of clarity and, in turn, confidence. Unfortunately, that "clarity" is often just noise in disguise. Sound familiar?
Think about the last major decision your leadership team made. Was it a rushed, single moment in time? Or was that decision part of an ongoing process that considered the perspectives and ideas of a cross-functional team? Hopefully the latter, yet as I continue with the Future of Work Project and talk with executives, all agree, without exception, that there’s room for improvement, often significant improvement. This is especially true given the pace and magnitude of change, as well as the need to better consider Ai-generated insights and ideas that pertain to the workforce and the very nature of work itself.
Given this, decisions, particularly key, strategic decisions, need to be part of an ongoing, well-understood, well-respected, cross-functional process. While this, arguably, has always been a leading practice, it's now all but essential. Who's in "the room"? What information is being consumed? At what frequency? For what purpose? Who's accountable? How are debates managed? What mental models or frameworks are used to align the team? All of these factors influence how information is consumed and considered and, in turn, the quality of the decisions; that is, the likelihood the decisions deliver the desired return.
Ai can assist in many of these steps, but it cannot yet replicate the subtle dynamics of trust, psychological safety, ethical discernment, or political navigation that shape real-world decisions. Nor should it. As my colleagues and I frequently emphasize: in an age of Ai, human judgment becomes even more valuable, not less.
So what’s the opportunity? Stop assuming that more Ai equals better decisions. Instead, design systems and processes where Ai augments human judgment.
Leaders can start by:
Designing an ongoing, cross-functional decision-making process that has a clear purpose, clear roles and responsibilities, and a clear plan to consume and consider information.
Creating and holding safe spaces where diverse perspectives and ideas can be considered and explored. Stay away from oversimplification and quick takes.
Surfacing cognitive biases and persistent myths, and working to actively counterbalance them with timely, relevant, and actionable information.
Developing shared frameworks and mental models that clarify when to involve Ai, when to trust it, when to question it, and when to slow down to simply learn and explore.
Prioritizing human-tech integration in decision-making. Let Ai do the pattern recognition and analysis, while humans focus on collaborative sense-making, alignment, and leading the organization through change.
And yes, this means creating the time and space to think about, design, and implement the leadership decision-making process itself. To consider scenarios. To imagine implications. To build alignment. In short, to lead.
Ai can help leaders make better decisions, but it doesn't guarantee better decisions. It's a collaborator, not a substitute. Your relationship with this collaborator, both individually and as part of a leadership team, needs to be well understood. Boundaries need to be set, as does clarity on when and how to involve it.
Again, in an age of Ai, human judgment is not less important; it’s more important than ever. That’s not me just advocating for leaders. It’s me advocating for the organization and its stakeholders. For now, this is the reality.
So, going back to the start: the next time Ai hands you an "answer," will you accept it and make it "your" decision, or will you lead it?
To learn more about the other Myths, click here. And to learn how to assess your organization's adaptive readiness and, in turn, build executive decision-making processes rooted in timely, relevant, and actionable insight, follow and connect with me here on LinkedIn. Finally, be sure to subscribe to the Future of Work Advisors Newsletter.