Agency in the Age of Artificial Intelligence
Many professionals, leaders included, are having a new experience right now: they try AI on a real piece of work — a draft, an analysis, a plan, a messy problem — and it doesn’t just “help.” It moves the work.
Sometimes by a little. Sometimes by a lot.
That can feel exciting. It can feel unsettling. Often it’s both.
For leaders, the question isn’t whether AI is useful. The deeper questions are human and organizational:
• What happens when a new “team member” enters the room who isn’t a person?
• How does AI change the leadership work we do with real people?
• Who is accountable for what AI produces — and for how it shapes our team’s behavior?
• What happens to leadership authority when an AI can outperform humans in certain kinds of thinking?
• How do we strengthen people’s agency when there is now a tool that often sounds like it “has the answer”?
The list could go on — and if we let it, it becomes wearying.
So let’s slow down and name what’s underneath.
At the heart of these questions is one word we’ve been exploring in recent posts: agency.
What’s at stake is agency
In Leadership Actually, we describe agency this way:
Agency is the ability to act autonomously and with purpose and to make meaningful decisions that shape outcomes.
It isn’t simply “doing things on your own.” It is having the confidence, skills, and freedom to act in ways that align with your goals and values — and, in the best of cases, those of your organization.
That definition matters here, because AI introduces a new and subtle leadership challenge:
It can dramatically increase our capability… while quietly training us not to decide.
AI can analyze faster than we can.
It can synthesize more information.
It can generate options we might not have considered.
It can recommend a course of action with striking confidence.
And that raises a leadership question that is not technical, but deeply human:
If the system produces the analysis…
If the system drafts the strategy…
If the system recommends the “best” path…
What happens to agency?
Do we continue to become more capable agents of our own future and encourage others to do likewise?
Or do we slowly begin outsourcing the very things that contribute to our humanity in the first place: the practice of choosing, deciding, and owning?
The new leadership temptation
Every era brings a characteristic temptation.
In the industrial era, it was the temptation to treat people as interchangeable parts in a machine.
In the bureaucratic era, it was the temptation to hide behind policies and procedures instead of exercising judgment.
In the information era, it has often been the temptation to confuse speed, metrics, and visibility with real effectiveness.
In this era, it may not be laziness.
It may be abdication.
It sounds like this:
“The system approved it.”
“The algorithm decided it.”
“The model recommended it.”
Those statements may describe a process.
But they do not describe responsibility.
A leader may authorize a system to operate within defined boundaries. But leadership does not disappear simply because a tool has become powerful.
If you allowed it, configured it, relied upon it — you still own the decision.
That clarity will matter more, not less, as AI capabilities expand.
AI as capability, not conscience
Used well, AI can become a powerful capability inside a team.
It can surface blind spots.
It can test assumptions.
It can model scenarios.
It can accelerate learning.
It can strengthen analysis.
In that sense, it can function like an additional cognitive instrument in the room — helping the group see more clearly.
But it is not a conscience.
It does not bear risk.
It does not repair trust.
It does not answer for outcomes.
Leaders remain responsible for setting boundaries, defining what “good” looks like, weighing tradeoffs, and absorbing consequences.
No tool — however intelligent — can replace that.
AI can do the work. It can’t take the heat.
Here is a line leaders will need to hold firmly:
AI can help with decisions. It can’t be the person who makes them — because it can’t be held accountable.
Or more plainly:
AI can do the work. It can’t take the heat.
When something goes wrong — ethically, financially, relationally — no one gathers the team and asks the software to explain itself.
They look to a human being.
They look to the leader.
This is not anti-AI. It is pro-agency.
AI can extend human capability.
But it cannot replace human responsibility.
The deeper opportunity
There is another side to this moment.
When execution becomes easier, the relative value of judgment increases.
When information becomes abundant, the value of discernment increases.
When automation expands, the importance of agency sharpens.
This is not a call to resist AI.
It is a call to mature alongside it.
Leaders who learn to use AI as a capability — without surrendering their authority or diluting their accountability — will be stronger, not weaker.
But that requires discipline.
It requires remembering:
You are still the one who chooses.
You are still the one who signs.
You are still the one who answers.
You are still the one who takes the heat.
That is the craft of leadership.
And that craft remains profoundly human.
As you experiment with AI in your own work, consider:
Where are you using it to extend your capability?
And where might you be tempted — even subtly — to surrender authorship?
Agency does not disappear in the age of AI.
But it must be practiced deliberately.