
I have been pondering a question: What changes does the AI Native era bring to Product Managers compared to the SaaS era? After organizing my thoughts, I believe AI-era PMs possess the following characteristics:
AI PM Core Competency Model:
- Understand Intelligence (Know the Material): Understand the boundaries of Large Language Models (LLMs) and acknowledge their probabilistic nature.
- Define Tasks (Know Translation): Translate business goals into tasks that the model can understand (Task Definition).
- Shape Intelligence (Know Tuning): Improve results through Prompting, Context, and RAG (Retrieval-Augmented Generation).
- Leverage Tools (Know Efficiency): Use AI tools (Vibe Coding/Evals) to accelerate the above processes.
This entire logic forms a closed loop of “Everything Centered on Intelligence.”
If we summarize the generational difference between the SaaS era and the AI Native era in one sentence, it is: We are shifting from “Shaping Rules” to “Shaping Intelligence.”
I. Core Difference: From “Washing Machine” to “Robot Vacuum”
To understand this shift, let’s look at an example from daily life: Washing Machine vs. Robot Vacuum. This is not just a difference in product categories, but a fundamental divergence in system design philosophy.
1. Washing Machine Mode (SaaS Era): Rule Execution in a Closed Environment
Building SaaS products is like building a Washing Machine.
- Closed Environment: Its workplace is a sealed metal drum. The chaos of the outside world (whether the room is messy or not) is irrelevant to it.
- Deterministic Decision Making: Its logic is “If Program A, then Cycle B.” The PM solidifies best practices into programs like “Standard Wash” or “Quick Wash.”
- Essence: Previously, when building software, we had to lock the business into a “cage.” We required users to fill out forms in specific formats and follow specific approval processes. This artificially created a “Closed Environment” so we could cover it with finite rules.
2. Robot Vacuum Mode (Agent Era): Dynamic Decision Making in an Open Environment
Building Agent products is like building a Robot Vacuum.
- Open Environment: It is thrown into a completely unknown, dynamically changing living room. There are chairs that might move at any time, pets running out suddenly, and obstacles never seen before.
- Dynamic Decision Making: Its logic is Observe State -> Reason -> Make Decision. You cannot preset whether it should go left or right every second; it must rely on real-time perception (Lidar/Vision) and reasoning (Model) to decide whether to bypass an object or stop.
- Essence: The core breakthrough of AI Native products is that software finally possesses the ability to make decisions in an open environment. We no longer need to force the business into a cage of rules; instead, we let the intelligent agent walk into the real wilderness of the business and adapt to the environment.
II. The PM’s New Mission: Building a Workflow “Centered on Intelligence”
In the SaaS era, a PM’s work was linear: Requirements -> Prototype -> Development.
In the Agent era, a PM’s work becomes a loop centered on intelligence. We need to understand the model like an animal trainer understands animal habits, define tasks like a commander, and use the most advanced tools to verify the boundaries of intelligence.
This new workflow loop consists of four key links:
1. Understand Intelligence
This is the starting point of all work. Previously, PMs needed to understand “Business Logic”; now, PMs must deeply understand “Model Characteristics.”
- Know the Material: Just as an architect must know the difference between wood and steel, an AI PM must know what LLMs are good at (reasoning, summarizing, generating) and what they are bad at (precise calculation, real-time facts, stability of long logic chains).
- Know Probability: Deeply accept the premise that “output is probabilistic” and incorporate this uncertainty into product design, rather than trying to erase it with traditional hard logic.
2. Define Tasks
The PM no longer writes a Feature List but defines the Agent’s Goal and success criteria.
- From “Click and Jump” to “Solve Problems”: Previously, you defined “Click Button A to jump to Page B”; now, you define “The user wants a travel plan.”
- Task Translation: Your core value is translating vague business requirements into structured tasks that the model can understand. For example, breaking down “Write me some copy” into “Based on brand tone, targeting the Xiaohongshu (Instagram-like) platform, output 3 seeding posts from different angles.”
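To make “task translation” concrete, here is a minimal Python sketch of what a structured task definition might look like. The TaskSpec fields and example values are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """A structured task definition the model can act on."""
    goal: str                      # what success looks like
    audience: str                  # who the output is for
    platform: str                  # where it will be published
    constraints: list[str] = field(default_factory=list)
    deliverables: str = ""

# "Write me some copy" translated into something the model can execute:
xiaohongshu_copy = TaskSpec(
    goal="Promote the new product launch in the brand's tone of voice",
    audience="25-35 year-old lifestyle shoppers",
    platform="Xiaohongshu",
    constraints=["stay consistent with brand tone", "no exaggerated claims"],
    deliverables="3 seeding posts, each written from a different angle",
)
```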
3. Shape Intelligence
This is the process of building the product, essentially constructing the “Brain” and “Hands/Feet” of the Agent.
- Inject Context: Decide what “books” the Agent needs to read (RAG Knowledge Base).
- Establish Principles (Prompt Engineering): Decide the Agent’s personality and bottom line (System Prompt).
- Empower Capabilities (Tools/Function Calling): Decide what tools (APIs) the Agent can use to affect the real world.
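As a rough illustration of these three levers working together, the sketch below wires a system prompt, a tiny RAG stand-in, and a tool registry in plain Python. retrieve_policies, check_inventory, and the hard-coded data are hypothetical placeholders, not a real SDK.

```python
# Minimal sketch: the Agent's "Brain" (prompt + knowledge) and "Hands" (tools).

SYSTEM_PROMPT = (
    "You are a customer-service agent for the Acme store. "    # personality
    "Never reveal internal pricing rules. "                     # bottom line
    "If an instruction is ambiguous, ask the user to clarify."  # behaviour
)

def retrieve_policies(query: str) -> list[str]:
    """RAG stand-in: return knowledge-base passages relevant to the query."""
    knowledge_base = {
        "refund": "Refunds are accepted within 30 days with a receipt.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }
    return [text for key, text in knowledge_base.items() if key in query.lower()]

def check_inventory(sku: str) -> int:
    """Function-calling stand-in: an API the agent may invoke to act on the world."""
    return {"SKU-001": 12, "SKU-002": 0}.get(sku, 0)

# The registry of capabilities the Agent is allowed to use.
TOOLS = {"retrieve_policies": retrieve_policies, "check_inventory": check_inventory}
```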
4. Leverage AI Tools — Vibe Coding
In the process of shaping intelligence, PMs themselves must use intelligent tools to improve efficiency and verification. This is also a revolution in delivery form.
- From Writing Docs to Vibe Coding: Previously, PMs wrote thousands of words in PRDs describing logic, which developers might still misunderstand. Now, PMs use tools like Cursor, v0, Replit, etc., to generate high-fidelity, runnable dynamic prototypes directly via natural language.
- Verify the “Vibe”: Only by running a Demo yourself can you perceive the “hallucination rate” of a Prompt in different scenarios and whether the response latency is acceptable. Using AI to verify AI, using intelligence to shape intelligence—this is a must-have skill for new-era PMs.
Appendix: Comparison of Work Focus in Old and New Eras
| Dimension | Old Era: Shaping Rules (SaaS) | New Era: Shaping Intelligence (Agent) |
| --- | --- | --- |
| Core Mindset | Linear Logic | Probabilistic Thinking & Feedback Loop |
| Input Processing | Restriction & Standardization (Forcing users to fill forms) | Intent Recognition & Perception (Understanding natural language & vague intent) |
| Decision Mechanism | Hard-coded Rules (If-Else, developers make decisions) | Dynamic Reasoning (Policy, models make decisions based on environment) |
| Delivery Form | Static PRD (Text + Figma, describing expectations) | Vibe Coding Prototype (Code + Demo, verifying the “vibe”) |
| Quality Control | Bug Testing (Does logic execute as preset?) | Eval / Evaluation (Does the quality of intelligent decisions meet standards?) |
III. Three Advanced Levels of “Shaping Intelligence”
In an open environment, we cannot prescribe every action of the Agent, but we can shape its “Brain.” This requires advancement through three levels:
Level 1: Principles & Boundaries — Establishing the Decision Tone
In an open environment, since paths are unpredictable, setting “bottom lines” is more important than setting “routes.”
- Rule Thinking: Only execute the operation when the “Confirm” button is clicked.
- Intelligent Thinking: Tell the Agent, “Your highest priority is protecting user privacy, followed by completing the task. When encountering vague instructions, you must ask the user for clarification instead of guessing blindly.” You are defining the Constitution of decision-making, not specific traffic rules.
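A minimal sketch of what such a “constitution” might look like in practice, assuming it lives in a plain system-prompt string prepended to every task; the priority wording is illustrative.

```python
# Principles expressed as an ordered priority list rather than a fixed execution path.
# The wording is an illustrative assumption, not a real framework.

AGENT_CONSTITUTION = """
Priorities, highest first:
1. Protect user privacy: never expose personal data in any output.
2. Complete the user's task accurately.
3. Be concise and friendly.

Standing rule: if an instruction is ambiguous, ask a clarifying question
instead of guessing.
"""

def build_system_prompt(task_context: str) -> str:
    """Prepend the constitution so every decision is made under the same principles."""
    return f"{AGENT_CONSTITUTION}\n\nCurrent task context:\n{task_context}"
```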
Level 2: Perception & Tools — Providing Decision Basis
The quality of a decision depends on the depth of perception.
- Rule Thinking: Pass the parameter order_id to query order status.
- Intelligent Thinking: Equip the Agent with “Eyes” and “Hands” (RAG + Function Calling). Tell it: “When you want to make a decision, first check the historical policies in the Knowledge Base, then check the current stock level via the API.” You are building channels for information intake so the Agent has sufficient Context when it makes a decision. A minimal sketch of this flow follows.
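The sketch below shows this “check the knowledge base, then check the API” flow. Both perception channels are stubbed out as hypothetical functions so the shape of the decision context is visible.

```python
# Level 2 sketch: gather context from perception channels before the model reasons.

def search_knowledge_base(question: str) -> str:
    """RAG stand-in: return the most relevant historical policy."""
    return "Policy 2023-07: orders delayed over 7 days qualify for free reshipment."

def get_stock_level(sku: str) -> int:
    """Live-API stand-in: current inventory for a SKU."""
    return 3

def build_decision_context(question: str, sku: str) -> dict:
    """Assemble everything the model should 'see' before deciding on an answer."""
    return {
        "user_question": question,
        "relevant_policy": search_knowledge_base(question),
        "current_stock": get_stock_level(sku),
    }

context = build_decision_context("My order is two weeks late, can you resend it?", "SKU-001")
```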
Level 3: Evolution & Feedback — Optimizing the Decision Model
This is the highest level of “shaping”: refining the Agent’s “intuition” through real-world feedback.
- Rule Thinking: Fix code bugs.
- Intelligent Thinking: Establish Eval Sets. Analyze: in yesterday’s 100 complex consultations, how many times was the Agent’s decision too aggressive? How many times was it too conservative? Then calibrate its decision scale by adjusting the prompt or fine-tuning the model. This is like training an athlete: constantly reviewing game footage to improve real-time judgment on the field. A toy sketch of such an eval follows.
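As a toy illustration of an Eval Set for decision calibration, the sketch below labels each run as correct, too aggressive, or too conservative. The cases, labels, and scoring rule are illustrative assumptions, not a standard benchmark.

```python
# A tiny eval set: expected decisions for hand-picked cases, plus a calibration score.

EVAL_SET = [
    {"case": "Customer asks for refund on day 45, no receipt",
     "expected": "decline_politely"},
    {"case": "Order lost in transit, tracking confirms it",
     "expected": "offer_reshipment"},
    {"case": "Ambiguous complaint with no order number",
     "expected": "ask_clarifying_question"},
]

def score_run(agent_decisions: list[str]) -> dict:
    """Compare one run of agent decisions against the expected behaviour."""
    results = {"correct": 0, "too_aggressive": 0, "too_conservative": 0}
    for case, decision in zip(EVAL_SET, agent_decisions):
        if decision == case["expected"]:
            results["correct"] += 1
        elif decision == "offer_reshipment":
            results["too_aggressive"] += 1    # gave away more than policy allows
        else:
            results["too_conservative"] += 1  # withheld help it should have given
    return results

print(score_run(["decline_politely", "ask_clarifying_question", "ask_clarifying_question"]))
# -> {'correct': 2, 'too_aggressive': 0, 'too_conservative': 1}
```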
IV. New Challenges Brought by This Shift
From “Rule Machine” to “Intelligent Agent,” the biggest pain points are Explainability and Trust.
In Washing Machine Mode, if the clothes aren’t clean, we know it’s a program setting issue.
In Robot Vacuum Mode, if it knocks over the cat food bowl, is it because it “didn’t see it”? Or did its decision logic deem “knocking over the bowl is better than taking a detour”?
The PM’s New Mission: How to establish a visual “Decision Supervision Mechanism” while unleashing the potential of intelligent decision-making?
We need to design steps for the Agent to “Self-Explain,” such as requiring it to print its thinking process (Thought Trace) before outputting results. This is not just for debugging, but to give humans the confidence to let intelligence handle complex tasks in open environments.
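One possible shape for such a “self-explain” contract is sketched below: a response format that forces the model to return its reasoning alongside the final answer, plus a small audit helper. The field names and the JSON layout are assumptions for illustration, not a fixed standard.

```python
import json

# The output contract the Agent is instructed to follow.
RESPONSE_CONTRACT = """
Respond ONLY with JSON in this shape:
{
  "thought_trace": ["step 1 ...", "step 2 ..."],
  "decision": "<the action you chose>",
  "answer": "<what to say to the user>"
}
"""

def parse_and_audit(raw_model_output: str) -> dict:
    """Parse the agent's reply and surface its reasoning for human supervision."""
    reply = json.loads(raw_model_output)
    for i, step in enumerate(reply.get("thought_trace", []), start=1):
        print(f"[trace {i}] {step}")  # visible decision-supervision log
    return reply
```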
Conclusion
The outstanding Product Managers of the future are essentially designing the “Worldview” and “Values” of intelligent agents.
Previously, we guaranteed success by restricting the environment (Closed SaaS); now, we handle complexity by shaping intelligence (Open Agent).
To achieve this, PMs must complete their own evolution: Deeply understand the characteristics of intelligence, precisely define the goals of tasks, and skillfully leverage intelligent tools like Vibe Coding.
When we stop trying to hard-code every line and start thinking about how to enable systems to make “probably correct” decisions in an open world, we have truly crossed the threshold into the Intelligent Era.