PRDs That Don't Suck
The simple template I've used repeatedly to maximize innovation, drive alignment and clarity, and keep stakeholders happy. Includes example inputs to guide the process!
“PRD” is cringe; use “spec” instead
Let’s start by replacing this outdated term with “functional spec” or just “spec.” I detest “PRD.” Here’s why:
The “R” for Requirements implies that Product is the master and everyone else is the order taker. False! Great outcomes come from blending disciplines to identify the best solutions.
The label implies that the document contents and the product are static and won’t change. Both are false.
Due to its baggage as part of the hardware waterfall movement, even the term itself makes people focus on process instead of outcomes and mission impact, which puts people in the wrong mindset.
Yeah, yeah, yeah, just take me to the template now please!
Yet they remain necessary at scale
On teams where you can skip the specs, please do so. However, most teams achieving some degree of success end up scaling to the point where keeping everyone aligned and in the loop becomes a bottleneck to quick, decisive action. Use specs when:
Alignment is required across a large group of diverse people representing different functions (e.g., sales, marketing, CEO, legal, engineering, QA)
The complexity of the work and problem are poorly understood across the team
You’re starting a new, large initiative
So here’s what must be included
Background and Goals
Explain what problem exists for the business and customers, at a high level and in a few sentences, so we can all understand why it’s worth our time to read further. Don’t write a five-paragraph expository essay; a few sentences will do. If they won’t, you haven’t distilled the core issue.
Here are some examples:
FIRE OUTBREAKS
“Our customers track fire outbreaks across the world, but when fires are near population centers, there’s a lot of manual work required to check how close a satellite thermal signature is to a particular building or set of people. Our product gives users access to all of these data sources, but there’s no nice way to get this answer even with all of that data without layering it on top of each other. Our plan is to make a canvas that combines these insights into one map view to give rapid answers for any fire so that customers can identify proximity to populations in seconds, not hours.”
SYSTEM PERFORMANCE
“System search performance is causing customer attrition, and we need to fix it. We lost 15% of our user base this quarter, speed is the #1 complaint coming in to customer success, and 95% of searches run an average of 15+ seconds, while industry benchmarks show that anything more than 1-3 seconds is unacceptable. Our plan is to attack the root architectural causes and add thoughtful UX flows for the most taxing queries that cannot be sped up.”
SEMICONDUCTOR DEFECT AI
“Semiconductor fabs are struggling to predict and prevent defects in their newest 3nm chip manufacturing processes. Currently, engineers spend 4+ hours per defect analyzing multi-sensor data from hundreds of process steps to identify root causes, leading to $2M+ in scrap costs per month. While our platform collects all the relevant sensor and inspection data, engineers must manually correlate across multiple dashboards and tools. Our plan is to create an AI-powered defect analysis workspace that automatically surfaces potential root causes by analyzing patterns across historical process data, reducing investigation time to minutes and catching systematic issues before they impact yield.”
That’s it; we don’t need more. Anchor the reader right where the problem or opportunity exists, no more, no less.
Who’s It For?
A well-crafted spec clearly defines the target users to align marketing, sales, and product teams around who the product serves and how success is measured. This means specifying who the specific users are and what we know about them—their demographics, behaviors, work context, and motivations. This is not a place to repeat your Ideal Customer Profile for the whole product—it’s a place to call out more concretely which subset of those individuals care about this particular feature. Save space here while being precise!
Examples:
“Medicinal chemists at customers with <50 employees who cannot write code”
“Users who have created 3+ custom reports in the past six months.”
“Aerospace engineers at commercial satellite companies who manually process 50+ thermal imaging datasets per week”
Scenarios and Research
Scenarios define the real problems real customers are experiencing now that this spec is designed to solve. Share real-life stories about specific users facing clear, detailed problems. These are not generic user stories or synthetic aggregations. If you’re starting with “Users feel that” then stop and start again writing about a concrete human you’ve spoken to. If you can’t name a concrete human, stop writing the spec until you can. This level of specificity exposes the nuances of the problem and guides the product team toward meaningful solutions.
A strong spec should include at least two to three distinct scenarios, each from a different customer but all tied to the same core problem. These scenarios should be varied to reflect different user contexts but stay true to the problem statement. Without multiple real scenarios, there is a risk of shifting from solving a user problem to pushing a solution. A well-designed product should address all these scenarios through a single solution, and as the feature develops, new scenarios can be added to ensure the product continues meeting evolving user needs. By the end of the spec, the proposed solution should clearly solve these scenarios at least to an MVP level, ensuring alignment across product, design, and engineering teams.
This is also a great section to include research that helps make the case, explain the urgency, or position why this matters more than other work.
Examples:
DRONE INSPECTIONS
Dave Chen at WindAccelerators managed an inspection of 47 wind turbines last week using our autonomous drone fleet. When post-processing the imagery, he discovered that changing cloud conditions had created inconsistent lighting across the dataset, forcing his team to manually adjust contrast settings on over 8,000 images to run our detection AI. The 3-day delay meant their client missed a critical maintenance window. Dave needs a way to automatically normalize environmental variations across large inspection datasets before running computer vision analysis.
AI CODE CREATION WITH APIS
Dr. Sarah Patel at LLMsRUs was evaluating their new language model's ability to write code when she noticed it was hallucinating API endpoints that didn't exist. She spent 2 weeks manually sampling 1,000 model outputs and cross-referencing them against documentation because their evaluation tools couldn't automatically detect fictional functions. The delayed feedback meant the training team had already started their next iteration with the same issues. Sarah needs a way to automatically validate generated code against real-world API specifications at scale.
Success Metrics
Next, we want to describe what outcomes we should see when the feature is live. After all, we only build to increase traction or revenue, so if we can't describe how the feature will impact one of these, we probably should not be building it in the first place.
A popular model is to consider three styles of metrics in this section: business metrics, customer metrics, and technical ones. For example:
Business:
- Earn the first pilot customer in [new market/new use case]
- MRR increases by 5%
Customer:
- 25% of new users create a report with [new feature]
- Average searches per user increase from 2.2 to at least 2.5
Technical:
- 99% of API requests return in under 1.5 seconds
- Maneuvers with fewer than five parameters compute in under one second
Be as specific as possible, and make sure you add stories at the bottom of your spec for any additional metric tracking required to prove out these hypotheses. It’s far less important that you have lots of metrics or many categories of metrics; what matters is that you have at least one measurable way to know whether the feature produced the desired outcome.
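To show how a technical metric like “99% of API requests return in under 1.5 seconds” becomes something you can actually check, here is a minimal sketch. The sample data, function names, and nearest-rank percentile method are illustrative, not part of any particular team's tooling:

```python
# Illustrative sketch: checking a "p99 latency under 1.5 seconds"
# success metric against a list of recorded request latencies.
# Sample values and the 1500 ms threshold are hypothetical.

def p99(latencies_ms):
    """Return the 99th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies_ms)
    rank = max(0, int(len(ordered) * 0.99) - 1)  # nearest-rank index
    return ordered[rank]

def metric_met(latencies_ms, threshold_ms=1500):
    """True if the p99 latency is under the agreed threshold."""
    return p99(latencies_ms) < threshold_ms

# Hypothetical sample: 98 fast requests plus two slow outliers.
samples = [120] * 98 + [900, 2100]
print(p99(samples))        # → 900
print(metric_met(samples)) # → True
```

The point is not the percentile math; it’s that a metric worded this precisely can be wired into a dashboard or test the week the feature ships.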
Approach
Up until this point, we haven’t said much about how we’re going to solve the challenges above, or how we will achieve our goals. The approach section is all about defining this and it covers three areas.
The User Experience
This is where we want to include the user journey map, designs, mock-ups or sketches, or any descriptions of how we expect users to interact with the system. If it is a technical feature, this might be a list of the API endpoints we plan to support. It doesn't matter who is using it; this is the section that describes how they will use it and what their experience will be. Please remember that “experience” is not limited to User Interface!
This can also be a link to a design document.
The Architecture
In this section, generally in a diagram, we’ll show how the pieces of our system come together to deliver the feature in a scalable, maintainable, and efficient manner.
Software architecture is the high-level structure of a software system, defining how different parts of the system interact and work together. It includes decisions about components (like databases, interfaces, and services), how they communicate, and how data flows through the system.
This can also be a link to the Technical Spec for more complex features where more room is needed.
The Technical Approach
For shorter specs where not much space is needed to describe the technical approach, I like to include it in this section, but for more complex ones, I generally use this area to link to a Technical Specification document. There are plenty of great guides that describe what goes into a technical spec, but in short we need to address how we will technically build the system, what its constraints will be, which technologies we’ll use, and how they come together to deliver the feature.
Stories, Release Plan & Prioritization
This is where we add a table that summarizes the small bits of functionality that together will add up to our feature. I like to use a simple table in the following rough format:
Story | Time Estimate | Priority or Release
To avoid duplication of work, the “Story” in this context is frequently not a full user story or job story, which I put into the ticket instead. This list is more of a quick chop for prioritization purposes, and a way for everyone to see at a high level what’s going to get included and what isn’t. For teams working in Sprints, you can assign each story a sprint number; for other styles of prioritization, you can simply mark items as P0, P1, P2, P3 to show what’s most important to get done in which tranches. This helps the spec live beyond one sprint, though I don’t recommend that it live for more than one to four sprints. Beyond that, people lose track, and it’s easier to copy the top half, add some new stories, and make a new spec to keep everyone aligned.
If there is a separate technical spec, then I rarely include technical stories in this list, but if it’s all in one document (which I prefer when it’s smaller), it’s fine to include user-facing or developer-facing high-level stories. This is NOT a replacement for a complete decomposition of the work that will live in the ticketing system.
Example:
Allow users to download a PDF report of the calculation | 1 day | P1
Track usage of the calculation feature | 4 hours | P2
Integrate Segment to receive all events | 2 days | P3
Typically I find that the sweet spot is 5-15 stories. Beyond that, you likely have something too big for one spec; with fewer, you likely don’t need a spec at all, just a few tickets in your existing system.
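A story like “Track usage of the calculation feature” above often hides real design work. As a rough sketch of what the smallest version might look like, here is a minimal in-process event recorder; the event names and fields are hypothetical, and a real team would forward these events to an analytics pipeline (such as Segment, as in the example stories) rather than a list:

```python
# Illustrative sketch of a "track usage" story: record named events
# per user so the success-metrics section has something to count.
# All identifiers here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UsageEvent:
    user_id: str
    event: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class UsageLog:
    def __init__(self):
        self.events = []

    def track(self, user_id, event):
        """Record one usage event (a real system would ship it downstream)."""
        self.events.append(UsageEvent(user_id, event))

    def count(self, event):
        """How many times an event fired; this feeds the success metrics."""
        return sum(1 for e in self.events if e.event == event)

log = UsageLog()
log.track("user-1", "calculation_run")
log.track("user-2", "calculation_run")
log.track("user-1", "report_downloaded")
print(log.count("calculation_run"))  # → 2
```

Even a sketch this small surfaces questions worth settling in the spec: what counts as one “use,” and which events the metrics actually need.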
Here are the nice-to-haves
If we have one of everything above, in many cases we’re done, but I find that complexity lives in many forms and shifts from feature to feature. Here are some nice-to-have sections; consciously decide whether you need each one every time.
Assumptions and/or Risks
What assumptions are we making about the user? About the market? About human behavior? About adoption? About 3rd parties? What risks do we have embedded in this execution? Are there problems that have never been solved before? Are there dependencies on other teams? These are great to get out into the open so the team can solve them proactively before they block progress.
Experimentation Plan / Adoption Plan
If our uncertainty around the success of the feature is modest or high, then it frequently makes sense to define how we will get feedback on the new feature. Who will we speak to? Will we run A/B tests? How will we roll it out? In stages? To whom first? Please note that by the time we get to a spec, we should have already done a good deal of discovery work to validate the value of the feature. The mistake most teams make is to assume that once a spec is written, the work should proceed from start to finish without reference to how the customer is receiving this new feature at each increment we release into the wild. This is where we can get clear on how we’ll iterate and with whom.
For more mature features in larger companies, this can be a great place to link to a marketing and/or GTM plan in addition to an early adopter strategy.
Test / QA Plan
For more complex features, or even as a matter of good ongoing hygiene, it’s helpful to link to the QA plan, or if it’s simple, to include it in the spec. I find that great specs are about 65% content and 35% links that other specialists can follow but that don’t need to block reading of the main value and approach content.
Non-Goals
Particularly for more ambiguous features, it’s helpful to reassure everyone about what is NOT expected to be built, who is not going to be served by this feature, and any other work we explicitly know should not be included in this phase. This helps everyone breathe a sigh of relief and focus where it counts first.
Specs and Releases
While there is some tidiness to having one spec for each release of a feature, in practice I find this leads to teams spending far too much time writing, and not enough time on everything else. Additionally, on the strongest teams, every commit is potentially a viable externally facing release, if the team decides it is worthy. For both of these reasons, I see no reason to directly tie specs to releases. Instead, I encourage teams to use the stories section to define how specific subsets of the work will be released to the world. I highly encourage product teams to release as often as they have sufficient value that could either help a user or elicit feedback. Sometimes this is as small as deploying a new table sorting method, and other times it’s a much more meaningful piece of work. That is a judgment call for the product and engineering professionals within each organization, not something a template should specify.
Rapid Fire FAQs
How long should a spec be?
Ideally, 1 to 1.5 printed pages of text. Certainly no more than three pages. Keep it concise or no one will read it and alignment won’t happen. Link to external content and force yourself to synthesize and summarize. The spec is not a replacement for the level of detail that goes into your ticketing system. However, it can be a nice place to centralize all of the most important content and links to related content about that feature to help align functions like engineering, product, sales, marketing, and QA.
How long should the work last from one spec?
It could range from about one sprint to a few months. I recommend a scope that will last between two and eight weeks. Anything less and the writing is occupying too much process time, and anything more and the team loses the beat, alignment wanes, and urgency evaporates.
How many features go in one spec?
One spec is for one big problem statement. There can be many facets to that problem statement, but we do not mix disparate features together. For example, if the feature has billing implications, usage analytics needs, or onboarding changes, that's fine so long as they're all part of the same feature. However, if we do not yet have billing implemented at all in our app, we would use a separate spec for billing and another for the current feature. In essence, the spec is for one coherent feature; it is not a summary of everything that will go into a particular release beyond that feature.
Here’s the template
You’ve made it all the way through. Here is my template with a quick summary of each section. Let’s go build something world changing!