What does a TPM Synthesize when AI is getting great at Synthesizing?
2025.14 - 3rd Post - To prepare for a future with fewer TPMs, we need better ways to measure how close that future is, so we can figure out what's left for us to do and hone those aspects.
This is the third post in a series where I'm pondering out loud, in public, about how AI might reshape, or, in my opinion, eventually replace the TPM role.
In the first post, I presented a hypothesis: AI will eventually lead to fewer TPMs, but to measure our trajectory toward that possible future, we need to change how we frame and measure the work a TPM does.
I proposed a new rubric that breaks the TPM job down into five core functions: Process, Synthesis, Distribute, Accountability, and Troubleshooting.
Last week, I focused on Process - how TPMs interpret inputs like PRDs, mocks, Jira tickets, and meeting notes. AI is reshaping those inputs, sometimes generating them, sometimes replacing them entirely.
This post is about the next layer up: Synthesis, which is how TPMs turn those inputs into something usable, directional, and trustworthy.
What Synthesis Really Means
When people hear "synthesis," they often think summary. But as TPMs, a big output of our input processing is separating the signal from the noise, and that signal goes beyond a recap of what happened or what's in the inputs.
It’s how a TPM connects disparate dots and builds a shared mental model across teams. It’s the work of constantly asking:
Where are we right now really?
What’s coming next, and how do we know?
What risks haven’t been spoken out loud yet?
Are we aligned, or are we drifting without realizing it?
It’s not just “what did I hear?” but “what does this mean and what should we do next?”
It’s the collective “what is going on with the program?” and “when should we panic?”.
AI Is Getting Good at the Summary Layer
There’s no denying it: AI tools are starting to synthesize pieces of the role.
Tools like Granola, Supernormal, and Everyones-Favourite-Task-Tracker's Intelligence feature can now summarize meetings, extract decisions, auto-tag action items, compile weekly updates across systems, and generate beautiful summaries from countless PRs and code changes. Even Slack GPT can read a thread and tell you what was decided, who said what, and what's unresolved.
Even engineering agents are doing this. Tools like Cursor, Windsurf, and GitHub Copilot analyze code activity, track PRs, and generate status reports directly from developer workflows. They can even generate entire task tickets for you.
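Under the hood, most of these tools reduce to roughly the same pattern: feed raw activity into a model with a structured prompt and get back decisions, owners, and open questions. Here's a minimal sketch of that summary layer using the OpenAI Python SDK. The prompt, model choice, and output sections are my assumptions, not any vendor's actual pipeline:

```python
# A minimal sketch of the "summary layer" these tools automate.
# The prompt and section names below are my own, not a vendor's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting(transcript: str) -> str:
    """Extract decisions, owners, and open questions from a raw transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a meeting summarizer. Return three sections: "
                    "Decisions, Action Items (with owners), and Unresolved Questions."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(summarize_meeting(open("standup_notes.txt").read()))
```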
These tools are helpful. They reduce the mental bandwidth required to stay on top of everything. They make the surface-level visible. They can do the majority of the heavy lifting. But they leave me wondering:
🤔 If AI can give us the what, are TPMs still responsible for the so what or has AI replaced that too?
The Role of the TPM Is Starting to Shift
In my own work, I lean on these AI tools daily. What I'm noticing is that I'm spending more of my time on what happens after the AI summary.
Recently, Zoom AI gave me a beautifully clean meeting summary. Every decision was labeled, every action item assigned. It was useful, but it wasn't perfect. If you read those notes raw, you couldn't make heads or tails of them. Why? You'd still lack context. Even more:
You wouldn’t know that the “decision” was made under pressure.
You wouldn’t hear the fatigue behind “we’ll try.”
You wouldn’t realize that two teams left with completely different understandings of what came next.
AI got the summary right, and to a certain degree even the tone and the human element of the conversation, but it missed the nuance entirely.
I still needed to make minor adjustments here and there, but overall I'm now spending less time playing stenographer in meetings and more time focusing on what the notes mean for the broader program.
That nuance is the human element that TPMs thoughtfully surface throughout the program lifecycle. Those are the signals that tell you when to trust the data and inputs, and when your gut says "something smells fishy."
We Synthesize for People, Not Just Platforms
A great TPM knows that when you synthesize the processed inputs, you aren't just summarizing; you're translating for different audiences:
Design wants to know when they’ll get something tangible to play with.
Product cares how decisions impact scope, value, or roadmap risk.
Engineering needs clarity on sequencing, architecture, and tech debt tradeoffs.
That is a deeply human aspect that AI still struggles with, because it lacks context about the audience involved.
Could AI be trained to tailor these versions based on role or title? Sure.
Could we build stakeholder profiles and tone templates? Probably.
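Here's roughly what that might look like: a minimal sketch of stakeholder profiles that render the same program facts differently per audience. The profiles, fields, and prompt wording are hypothetical, just the shape of the idea:

```python
# A minimal sketch of "stakeholder profiles and tone templates":
# one set of status facts, rendered per audience. All fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class StakeholderProfile:
    role: str
    cares_about: list[str] = field(default_factory=list)  # what to foreground
    tone: str = ""                                        # how to frame it

PROFILES = {
    "design": StakeholderProfile("design", ["when a tangible build lands"], "concrete dates"),
    "product": StakeholderProfile("product", ["scope", "value", "roadmap risk"], "tradeoff framing"),
    "engineering": StakeholderProfile("engineering", ["sequencing", "architecture", "tech debt"], "technical detail"),
}

def build_prompt(status_facts: str, audience: str) -> str:
    """Turn one set of program facts into an audience-specific summary prompt."""
    p = PROFILES[audience]
    return (
        f"Summarize the following program status for a {p.role} audience. "
        f"Foreground: {', '.join(p.cares_about)}. Style: {p.tone}.\n\n{status_facts}"
    )

# Example usage:
# print(build_prompt("API migration slipped 1 sprint; mocks approved.", "product"))
```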
But what AI doesn't yet do is break the pattern on purpose. It doesn't know when to withhold information to prevent panic. Or when to escalate a risk even if no one's asking yet. Or when to say, "I know we said X, but based on what I'm seeing, we need to rethink this."
🤔 Synthesis isn’t a formatting problem. It’s an emotional, political, and strategic act. Is this the human element that TPMs will need to get better at in an AI future?
The Best Signals Aren’t Always Documented
Some of the most important signals I track as a TPM aren’t in any dashboard.
They show up in tone, in energy, in pauses:
How confidently engineers talk about their backlog
Whether product is asking for changes or just quietly waiting to escalate
If design is still working in iterations/wireframes when we’re two weeks from launch
When someone says “should be fine”... and then doesn’t say anything else
These are hard to teach. Hard to document. Hard to automate. They come from proximity, from pattern recognition earned over time through failures and successes, over and over again.
AI may eventually learn to detect some of this. But right now, it still needs us.
What Would It Take to Trust AI with Synthesis?
If I ever trusted an AI agent to synthesize on my behalf, and to send that synthesis out to an audience without my review, it would need more than access to my tools and data.
It would need my judgment. My preferences. My awareness of nuance, tradeoffs, and timing. The ability to selectively surface signals is what makes synthesis valuable. And that’s still deeply human. It is the value we still bring as TPMs.
Maybe we'll get there with prompt engineering. Maybe AI agents will eventually learn how I like to work, what to highlight, what to ignore. Maybe agent-building tools will get that much better at learning to be us. But for now, I'm still the one stitching meaning from the signals.
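If I were to start encoding that judgment today, it might look like review gates: rules that decide when an agent's synthesis can go out on its own and when it has to wait for me. A minimal sketch follows; the phrases, audiences, and thresholds are all hypothetical preferences, not an existing product's behavior:

```python
# A minimal sketch of encoding TPM judgment as review gates before an
# agent sends a synthesis unsupervised. All signals and thresholds are
# hypothetical stand-ins for one TPM's preferences.

RISKY_PHRASES = ("should be fine", "we'll try", "probably ok")

def needs_human_review(summary: str, audience: str, days_to_launch: int) -> bool:
    """Hold the send and ask me first when stakes or ambiguity are high."""
    text = summary.lower()
    if any(phrase in text for phrase in RISKY_PHRASES):
        return True   # hedged language is exactly where my gut check matters
    if days_to_launch <= 14 and audience in {"execs", "cross-team"}:
        return True   # close to launch with a wide blast radius
    return False      # low-stakes updates can go out unreviewed
```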
Final Word
Look - synthesis isn’t going away. But it is evolving.
AI is now part of the workflow. It helps us move faster. It frees up space. But it also risks over-smoothing the narrative, filtering out the tension, the nuance, the real-time tradeoffs. And those are the things that still require us.
Not because we’re better.
But because we’re closer to the messy, contradictory, uncertain heart of the work. The human elements.
Until next time!
-Aadil