What Do TPMs Distribute If AI Gets Better At Owning The Message
2025.15 - 4th Post - To prepare for a future with fewer TPMs, we need better ways to measure how close that future is so we can start to figure out what is left for us to do, and hone those aspects.
This is the fourth post in a series where I’m pondering out loud in public about how AI might reshape, or perhaps even replace, the Technical Program Manager role.
In earlier posts (Process) I talked about how TPMs have traditionally processed noisy, unstructured inputs like PRDs, design mocks, Slack threads, and half-written Jira tickets. I also explored how AI is starting to clean up those inputs or even generate them. In last week’s post, I wrote about Synthesis, which is the work of stitching signals together into something directional, usable, and true.
But once you’ve processed the inputs and synthesized meaning, there’s still one more step: you have to get the message out.
That’s where the real pressure starts. Why? Because in many ways, distribution is where your work meets the world.
It’s where your synthesis becomes visible. It’s how your judgment travels. And it’s where your credibility either holds or cracks.
What impact can modern Gen AI tools have on this function? Let’s explore.
The Art of Distribution
When I think about what I actually spend the most time on before hitting send, it’s not writing Jira tickets or summarizing meetings; it’s communicating with senior stakeholders. Writing updates for leadership. Sending escalations. Framing status. Even when driving executive reviews or steering committee meetings, you are leveraging that synthesized information to communicate an action into existence.
Why? Because those communications carry weight.
First, these people are far from the details and have almost no time. Every sentence has to earn its place. You have to be crystal clear: Are you informing them, or asking them to act?
Second, your own credibility is on the line. When you send updates to leadership, you’re not just sharing information. You’re demonstrating whether you can lead. Your ability to assess risk, frame reality, and drive the next step is under a microscope, not just from execs, but also from the teams you're representing. Do they feel like you captured the situation? Did you preserve their trust?
I feel this most acutely with escalations. Because the truth is, you don’t get unlimited chances to escalate well. If you come in too vague, too dramatic, or too late, people remember. And that memory becomes a part of your reputation.
This is why AI-powered summaries are still not at a level where I can trust them to carry that weight. I am still uneasy about distribution without any intervention, curation, or editing from me.
I haven’t yet seen an example where I or anyone I work with trusted an AI agent to communicate on our behalf without rewriting it. Maybe others have. But every time I’ve seen an AI draft an update, I’ve edited it. Reframed it. Removed something. Changed the tone. Adjusted the timing. Deferred sending.
And that’s not just because the content was wrong, inaccurate, or misleading; it’s because the stakes were high. And AI doesn’t feel stakes.
🤔 Which makes me wonder: What would it actually take to trust an AI agent to carry your message for you? Not just summarize a meeting or auto-tag a Jira card, but speak to leadership on your behalf? Frame the story? Own the tone?
I'm not sure yet. But I think it's a question worth asking. Let me know what you think.
There’s another aspect of distribution I’ve been reflecting on: how much we change what we say depending on who we’re saying it to, and even what we choose to say or leave unsaid.
I adjust how I communicate based on the team and the function. Design cares about user experience, demo availability, polish. Product wants to know if scope is shifting and how it affects metrics or roadmap impact. Engineers want technical tradeoffs, but not an essay. Just the relevant details that help them plan and decide.
What’s common across all groups is that I’m quietly trying to answer one unspoken question: What do you need to know, and what should you do next?
Sometimes I write this out explicitly. Sometimes I let it hang in tone. But distribution, at its core, is not just about transmitting information. It’s about shaping it for action: for this person, in this role, at this moment.
Could AI be trained to do that? Maybe.
With stakeholder profiles, past context, and output tuning, it could get close. But would it know when to hold something back? When to reframe urgency without triggering panic? When to let something go unsaid? I’m not convinced just yet.
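To make the idea concrete, here is a purely illustrative sketch of what "stakeholder profiles and output tuning" could look like as data and code. Everything here, the profile fields, the function name, the facts, is a hypothetical assumption for illustration, not a real tool or a claim about how any AI product works:

```python
# Hypothetical sketch: stakeholder profiles as data, used to filter one set
# of facts into audience-specific framings. All names and fields here are
# illustrative assumptions.

PROFILES = {
    "design": {"cares_about": ["user experience", "demo availability", "polish"]},
    "product": {"cares_about": ["scope changes", "metric impact", "roadmap impact"]},
    "engineering": {"cares_about": ["technical tradeoffs"]},
}

def frame_update(audience: str, facts: dict) -> str:
    """Keep only the facts this audience cares about, and always answer the
    one unspoken question: what do you need to know, and what's next?"""
    profile = PROFILES[audience]
    relevant = {k: v for k, v in facts.items() if k in profile["cares_about"]}
    lines = [f"For {audience}:"]
    lines += [f"- {topic}: {detail}" for topic, detail in relevant.items()]
    lines.append(f"- Next step: {facts.get('next step', 'none')}")
    return "\n".join(lines)

update = frame_update("product", {
    "scope changes": "checkout flow deferred to Q3",
    "technical tradeoffs": "cache layer rewrite adds 2 weeks",
    "next step": "confirm revised launch date by Friday",
})
print(update)
```

Even in this toy version, notice what the code can and cannot do: it can filter and format, but the judgment calls the paragraph above describes, holding something back, reframing urgency, leaving something unsaid, live outside the data model entirely.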
And truthfully, I’ve learned a lot about distribution by getting it wrong.
I’ve buried the lede. I’ve over-explained. I’ve shared too much detail. I’ve put outcomes into updates that only raised more questions. Each time, I had to watch the ripple effect: back-and-forth threads, people asking for clarification, someone else jumping in to correct or reframe. These signals tell you your message didn’t land, or at least missed the impact you were aiming for.
The measure of good distribution isn’t how polished your message is. It’s how little you have to clean up after you send it. If people are still confused, if nothing moves forward, if your message sparks more churn than clarity, then it didn’t do its job.
I’ve started to pay close attention to the aftermath: Did the VP immediately respond with a decision? Did the engineer take action? Did the update bring alignment or did it quietly erode it?
Because one message, if it’s off, can cost a lot. Sometimes it’s just a little confusion. But sometimes it’s a slip in trust. Or a delay that creates rework. Or a misinterpretation that forces a reset two weeks later.
What’s at stake when distribution goes wrong?
Credibility and alignment. That’s the short answer.
A poorly timed or poorly framed update can undo days, even weeks, of progress and trust. It can cause unnecessary churn. It can spark escalations or lead to decisions being made based on incomplete or misunderstood information. And if it happens more than once, your skills come under question. People start to lose confidence in your ability to communicate clearly and lead under pressure.
That’s the quiet cost of poor distribution, not just slowdowns or confusion, but doubt. Doubt in your voice. Doubt in your framing. Doubt in your leadership.
Which is why I still believe this part of the job is deeply human. Maybe not forever. But still, for now.
AI can help you format the update. It can pull the right data. It can tag the recipients and summarize the timeline.
But distribution isn’t just mechanics.
It’s judgment.
It’s timing.
It’s intent.
It’s the moment where you step forward and say “Here’s what’s happening. Here’s what it means. And here’s what we need to do.”
And that part, the one where the system listens, aligns, and moves, still requires a human voice.
Final Word
If you’ve made it this far, I’d love to hear from you:
When was the last time your update caused more confusion than clarity?
What’s your personal checklist before sending a message to leadership?
Have you ever trusted AI to distribute a message on your behalf? What would it take for that to feel safe?
Drop a comment or reply because I’m genuinely curious how other TPMs, PMs, and engineers are thinking about this shift.
Because the tools are getting louder. But clarity still depends on how we use them.
Until next time.
-Aadil