Legitimacy Is the New KPI: Ethical AI Leadership for the C‑Suite
- Amii Barnard-Bahn

- Mar 2

At Davos this year, I found myself in a small conference room just off one of the main corridors. Outside, cameras and reporters were clustered around the big AI announcements: productivity gains, new partnerships, projections of trillion‑dollar value. Inside, about a dozen of us—CEOs, a central banker, a chief risk officer—were huddled around a table with coffee that had gone cold.
What struck me was that we weren’t talking excitedly about the big announcements outside. We were talking about the questions that worried these leaders.
One CEO, whose company employs more than 100,000 people, leaned forward and said quietly, “I’m less concerned about whether our models work. I’m concerned about whether my people will trust them.”
Another leader asked, “What happens to our legitimacy if an algorithm we approved hurts people? We’ll own the fallout, not the vendor.”
A third admitted, “Our board keeps asking for AI efficiency metrics. My bigger fear is losing the confidence of regulators and the public.”
As an executive coach to Fortune 500 CEOs, I spend most of my time helping leaders navigate the messy challenges that don't make headlines: board dynamics, leadership transitions, decision-making under pressure.
This year at Davos—both in my coaching conversations and through my role as Senior Executive Fellow at The Digital Economist—one question kept coming up: How do you maintain legitimacy as a leader when AI is making or influencing critical decisions? It's not a technology question. It's a leadership question.
In my own session, Architecting Intelligent Institutions: AI at the Core of Governance, Value, and Security, a theme kept surfacing beneath all the talk of innovation and disruption: in an AI‑driven institution, legitimacy—not efficiency—is now the most fragile and strategic asset you have.
You can buy models and computing power. You can outsource implementation. What you cannot outsource is whether employees, regulators, and the public still believe your decisions are fair, transparent, and aligned with your values. That is the new frontier of ethical AI leadership.
Why Legitimacy Is the New KPI in an AI World
When I coach executives on major change initiatives like navigating AI deployment, I define legitimacy as the collective belief among your stakeholders that you deserve the authority you hold. It’s the sense that your decisions are made fairly and consistently, that the reasoning behind them is understandable, that you are acting in line with your stated values, and that you are weighing long‑term stakeholder interests, not just this quarter’s numbers.
It goes far beyond technical accuracy. A highly accurate model can still undermine your organization if people do not believe you should be using it in the ways you are choosing to.
The 2025 Edelman Trust Barometer reinforces this point. Business remains the most trusted institution globally, but that trust is increasingly contingent on a blend of competence and ethics—on being both effective and values‑driven.
People expect CEOs not just to deliver financial performance, but to lead on technology, societal issues, and the responsible use of innovation. Doing things right is no longer enough; you must be seen as doing the right things.
AI magnifies this dynamic. When a human leader makes a decision, employees may disagree, but they can usually infer that judgment, context, and dialogue were involved. When an opaque system makes—or heavily influences—a decision, people often experience it as unexplainable, unchallengeable, and potentially unfair. That perception is a direct hit to organizational legitimacy.
Ethical AI leadership means closing that gap between technical capability and human trust.
3 Dimensions of Legitimacy You Must Protect As AI Scales
To make this practical for the executives I coach, I encourage leaders to think in terms of three interlocking dimensions of legitimacy: procedural, relational, and ethical. Each one matters. Neglect any of them, and your leadership may become faster in the short term, but more fragile, less trusted, and ultimately less effective.
Procedural legitimacy
Procedural legitimacy is about how AI‑supported decisions get made: your governance, decision rights, and transparency. In practice, that means defining what AI ethics means in your context, building ethics into product development, taking a lifecycle approach to bias, creating cross‑functional oversight groups, and being transparent about how AI is used.
For a C‑suite leader, the core question is whether you can explain, in plain language, how your AI systems are approved, monitored, and held accountable.
When an employee, regulator, or journalist asks, “Why was this decision made?”, do you have a real answer beyond “the system said so”? Is it clear who is ultimately accountable when AI is involved in a bad outcome?
It’s important to remember: you cannot hold a system accountable. You have to hold humans accountable. Strong AI governance—cross‑functional oversight, risk assessments, documentation—is not bureaucracy for its own sake; it is how you signal to stakeholders that you take your responsibilities seriously.
Relational legitimacy
Relational legitimacy is about how people experience AI decisions. In coaching conversations with leaders in legal, risk, and HR, I hear consistent worries surfacing among employees. They wonder if AI will decide they are redundant, whether an algorithm will influence their performance ratings or promotion prospects, or if automated screening tools will treat their customers and suppliers fairly.
Your people don’t just evaluate the outcomes of AI; they evaluate how you talk about AI. Ethical AI leadership at the relational level means narrating the “why” behind each AI deployment: how it will help, where its limits are, and what it will not be used for.
It means creating real feedback loops: town halls, office hours, anonymous channels, and pulse surveys that explicitly invite questions and concerns. And it means preserving meaningful human touchpoints for high‑stakes decisions like termination, promotion, or discipline, so that people have the opportunity to be heard.
In my Compliance Week article on AI adoption in compliance, I described how tools that summarize policies or generate case studies can free up professionals to focus on more strategic work. In my coaching work, I see this play out constantly: the technical solution works, but the trust infrastructure around it doesn't.
Those benefits are real. But they only materialize when employees trust that these tools are being used to augment their judgment, not to replace it, and when leaders demonstrate that concerns will be taken seriously, not dismissed as “resistance to change.”
Ethical legitimacy
Ethical legitimacy asks whether your AI behavior actually reflects your values. This is where lofty statements meet code and process. If you claim that privacy is a core value but expand data collection and surveillance without meaningful consent or explanation, employees and customers will notice the disconnect.
Here, leadership questions sound like:
- Where do our values show up in AI design and deployment decisions?
- Who in the organization has the standing to slow or stop an AI initiative on ethical grounds?
- Do we have a structured way to assess bias, fairness, and unintended consequences, not just technical performance and ROI?
The World Economic Forum, NIST, and the Council of Europe have all published frameworks for trustworthy or responsible AI, emphasizing fairness, accountability, transparency, and human oversight. These are not just technical checklists for data scientists. They are tools for boards and executive teams to ensure that AI is being used in ways that are consistent with the organization’s purpose and promise.
5 Practical Moves for Ethical AI Leadership
If you sit on the executive team, here are five steps you can start on now:
1. Define what “ethical AI leadership” means for your organization. Don’t assume shared understanding. Translate your values into specific AI principles: how you will collect and use data, what you will not automate, where human oversight is mandatory. Put it in writing.
2. Stand up cross‑functional AI governance. Create a standing group that includes legal, compliance, HR, technology, and key business leaders. Charge them with reviewing AI use cases, assessing risks and opportunities, and monitoring emerging regulations and standards.
3. Invest in AI literacy for the C‑suite. You don’t need everyone to become a data scientist, but your leadership team must understand enough about how AI works, and where it fails, to ask challenging questions, spot red flags, and avoid being overly deferential to “the system.”
4. Communicate your AI vision and guardrails, early and often. Don’t roll out AI quietly and hope no one notices. Share your intentions, the benefits you expect, the safeguards you’ve put in place, and how people can raise concerns. Communication is not a one‑time memo; it’s an ongoing conversation.
5. Measure trust and perceived fairness around AI. Add AI‑related questions to your engagement surveys, ethics pulse checks, and listening sessions. Ask: Do employees feel AI is being used fairly? Do they understand how it affects them? Use what you learn to adjust both your technology and your leadership behaviors.
These are not “nice to haves.” They are core components of maintaining your license to operate in an AI‑driven world.
AI Can Scale Your Decisions. Legitimacy Makes Them Stick.
Walking out of Davos this year, I was struck by how many thoughtful leaders had arrived independently at the same realization: AI will transform how we work, but the organizations that thrive will be those that protect and grow legitimacy as carefully as they pursue efficiency.
This is exactly the kind of leadership challenge I work on with Fortune 500 CEOs—whether they're navigating AI governance, board dynamics, or high-stakes transitions where there's no playbook. The question is always the same: How do you lead when certainty isn't available and stakeholder trust is on the line?
Technology can be replicated. Your culture, your values in action, and the trust you build with stakeholders cannot. If you're a CEO or board director wrestling with these questions, let's talk.