Working with AI as a Decision-Maker
From HBR:
Plenty of people use generative AI for writing emails, building slide decks, scheduling meetings, and taking notes. It can feel liberating to hand off such routine tasks. But as we save time, what’s the cost?
While AI can create space for higher-order thinking, it can also tempt us to outsource that thinking altogether. The challenge isn’t just that AI is powerful; it’s also persuasive. It drafts first, sounds confident, and moves fast. When we’re under pressure, tired, or simply eager to move things forward, it’s easy to let the tool make decisions for us without pausing to consider whether we’re about to delegate to AI thinking we should be doing ourselves.
This dynamic is subtle but significant. Increasingly, we turn to AI for summaries instead of reading the original feedback. Teams are basing strategies on AI-generated trends without always questioning the source data. Managers are relying on AI to write performance reviews and assuming the tone is neutral when it’s not. These aren’t failures of leadership. They’re signs of how quickly the human-centered part of decision-making—the values and judgment that give choices meaning—can slip away.
Good leadership has never been about having all the answers. But it does demand reflection, courage, and clarity of purpose—qualities AI can’t replace. As the technology grows more capable and promises more speed and ease, complex decisions will increasingly require slowing down and thinking deeply.
In my work as a decision-making expert focused on complex problem solving, I’ve seen how easy it is to substitute AI’s thinking for one’s own. After seeing this play out with my consulting clients, I began researching remedies to counter the pull of AI-led decisions.
I’ve found that to meet this moment, we don’t need more technical skill. We need orientation: a clear sense of our role, the nature of the task, and what judgment belongs to us, not the machine. I developed what I call AI leadership anchors: four simple, durable principles to help leaders stay grounded in their role as thinkers rather than tool users. These anchors aren’t rigid rules; they’re mental cues designed to prompt awareness and keep you in charge of how and when you apply AI.
The anchors are an effective way to preserve and protect decision-making autonomy and efficacy. They offer a way to maintain control—and improve the quality and credibility of a decision outcome—even when the machine is moving fast and trying to pull you along at its speed.
The Four AI Leadership Anchors
One of my clients, “Priya,” a senior technology executive, used the leadership anchors when a long-term client expressed frustration with a new financial trading protocol. Priya suggested a meeting and turned to AI to help her prepare. Like many leaders, she was under pressure, concerned about the relationship, and determined to get it right.
Here’s how the anchors work, why each is critical, which questions you should ask yourself, and how Priya used them to make sure she was leading with her own thinking, not AI’s output.
1. The authority check
AI can be great for getting started, fleshing out your ideas, and developing a work product. But while it may “author” the product, you are the authority on the situation. Don’t let AI’s first draft be your final draft.
This anchor is about exerting your authority as a leader. Think of AI like an excited puppy, walking ahead of you: The puppy might think it’s in control, but you’re walking the dog, not the other way around. So let AI go first, but don’t mistake its speed for quality.
As you review AI’s output, ask yourself:
Did I get where I want to go?
Did I solve the problem I’ve identified?
Is this the tone, nuance, and context I want?
What has AI missed or assumed?
After Priya learned of her client’s frustration, she asked AI to draft an email outlining next steps. The response was polished and diplomatic—tempting to send as-is. But Priya paused and asked herself the questions above. She then reread the email, focusing on how the client would receive it. While the tone seemed neutral, Priya was struck by the email’s lack of concern for the client’s point of view. She added an opening paragraph to validate the client’s frustration before moving to AI’s proposed solution.
2. The purpose check
AI is driven by probability; you’re driven by purpose.
AI can suggest next steps, but it doesn’t know your real motivations. It will give you answers based on what other people have done. It doesn’t understand your values, context, relationships, or long-term vision.
This anchor is about not allowing crowdsourced wisdom to replace your unique perspective. Identify a point in your decision process to refocus on what matters to you. Ask:
What is my short-term goal?
What is my long-term goal?
Does the AI output reflect these goals?
Before walking into the meeting with her client, Priya used AI to prepare talking points but set aside five minutes to reflect and reconnect with her mission—both for the meeting and for her organization.
Priya’s short-term goal was to save the contract. But the only way to achieve that goal was to respond as a leader who prioritizes collaborative relationships—her long-term goal. AI had recommended that Priya be up front that the client’s request was not easy to meet. But Priya felt that her viewpoint shouldn’t supersede a discussion about the client’s needs and perspective. What AI had framed as directness felt to Priya like a relationship disconnect. Priya revised the talking points to begin by asking the client, “Can you walk me through what success would look like from your side?”
3. The accountability check
No matter how helpful a tool is, you’re accountable for the consequences when things go wrong. AI won’t be the one sitting in the boardroom or the performance review when a decision doesn’t land—you will. When the stakes are high, make sure your thinking, not AI’s, underpins your decision-making. Ownership can’t be delegated.
Even if AI is taking notes on your meetings, you’re the one who’s actually there. Tailor your responses in real time to react to the reality you’re in. Before sharing AI output, ask yourself:
Do I stand behind this information?
Am I comfortable defending this viewpoint?
During Priya’s client meeting, the client pushed back on a few of her suggestions. She realized that one recommendation, which came almost verbatim from her AI prep, suggested framing the new trading protocol as protecting her company rather than serving the client’s needs. It felt tone-deaf.
She owned it. “That idea is not responsive to your needs. Let me offer something more aligned with what you’ve just shared.” It was Priya’s credibility on the line—not the tool’s.
4. The truth check
AI always sounds sure of itself, even when it (and you) shouldn’t be. It’s trained on aggregated data that can contain outdated, incomplete, or incorrect information, and it sometimes entirely fabricates sources and stories.
This anchor is about verifying claims before you act on them. Bring skepticism to AI to prevent unfounded assumptions, misinformation, and disinformation from infiltrating your work. Only your vigilance and review keep your authority intact. Actively interrogate the output you receive. Ask yourself:
Is this information verifiable?
Is there another perspective on the problem?
How could this information be wrong?
When Priya wanted to know whether the client’s frustration could also be a problem for other clients, she prompted AI to “summarize the biggest industry concerns and client pain points with this new trading protocol.” Her AI tool quickly replied with a list of complaints. She thought she’d flagged something important and prepared to bring it to her boss. But before doing so, Priya checked the list and discovered that one of the complaints stemmed from data that was two years old, and another was published in a trade blog discussing rumors, not facts. If she had trusted the output without question, she could have hurt her credibility.
Anchoring Your Decisions
With each of the anchors, Priya resisted defaulting to AI’s fast, polished output. Instead, she used the leadership cues to create space to bring her judgment forward and lead with intention.
It’s easy to feel a sense of awe or even relief in the face of AI’s growing capabilities. AI is so fast and so confident that it’s easy to second-guess your own thinking and cede control. But AI is neither magic nor omniscient. AI tools can support our decision-making, challenge our assumptions, and help us see around corners, but they don’t understand our context, our relationships, or what matters most to us.
AI is an impressive tool, but it’s not an infallible guide. Don’t let it take charge. The four AI leadership anchors can help you remain in control, clarifying and deepening your own thinking in the process. You’ll lead not just with tools, but with clarity, curiosity, and conviction. Machines typically repeat or enhance what they’ve seen before. It’s humans who can dream up ideas the world has never seen.
