When a New Team Member Isn’t a Person
On Monday, we explored what the AI-driven acceleration and leveling up of capability means for leaders, teams, and organizations. We also offered some ideas about what leaders can do to help people match that speed instead of falling prey to it.
The same thing that’s happening almost everywhere is happening with us at the Co-Creative Leadership Alliance, too. So we decided to use our own process questions—the same ones we wrote about in Leadership Actually—to help us get a handle on this phenomenon.
Here’s what it’s looking like for us as AI makes its way into the rooms we work in.
What’s happening?
We’ve been using AI on real work: drafts of blog posts, structural edits, framing questions, messy thinking that needs shaping.
And something became obvious quickly: it doesn’t just “help.” It changes the tempo of the work.
We can generate options faster, see structure sooner, and iterate more quickly than we could on our own.
Who’s involved?
On the surface: our team and a new tool.
But in reality, more is present: our standards, our voice, our sense of authorship, and our responsibility for what we publish.
And hovering just outside the room is another presence: the belief—held by more than a few people—that using AI is cheating.
We understand that reaction as a natural human fear of shortcuts, hollow work, abdication, or loss of control. Those are not silly concerns. They’re signals about what people value.
What’s our reaction?
Curiosity laced with caution.
It’s been tempting to treat AI output as “nearly done” simply because it looks and sounds so damned polished. But we’ve come to see that the real risk isn’t poor writing.
It’s a subtle shift away from our own authorship toward simply reviewing what the tool produces.
So we’ve slowed down and begun asking:
Is this accurate?
Is this ours?
Does it reflect what we actually believe?
Would we sign our names to it without qualification?
Those questions matter more than the tool. They help us maintain the personal power of choosing—and continue being agents of our own work and lives.
What are we learning — and re-learning?
We can make use of new capability without relinquishing ourselves—but only if we’re clear about accountability: who has it and who doesn’t.
AI can draft, suggest, and analyze. But it cannot be accountable. It cannot be answerable. It cannot absorb consequences. Only humans can do those things.
So the practice is straightforward (though not always easy): use the tool to extend thinking—then do the human work of shaping, reshaping, and, once you get it right, deciding.
That’s what we’re working to do ourselves and encouraging our readers to do as well. Not to outsource judgment—but to practice with a new instrument in the room.
To experiment openly.
To compare drafts.
To notice when capability expands—and what it takes to stay in authorship while it does.
Based on our experience so far, here’s what we’d say about the “cheating” question:
If AI is used to avoid thinking, that’s not growth—it’s abdication.
But if AI is used to support thinking, and the team still owns the judgment, the values, and the final decision, that isn’t cheating.
That’s craft, practiced with a new instrument in the room.
When a non-human teammate is in the room, leadership doesn’t disappear. It becomes more visible—just at a deeper level.
And the charge becomes clearer:
Use the capability. Keep the accountability. Practice agency on purpose.
If something here resonates—or challenges you—we’d love to hear what you’re experimenting with. What’s one way you’re using AI (or another new capability) without giving up authorship? Share a small example in the comments—we read every one.