The Startup Product Diagnostic
The seven areas I inspect to assess startup health in the 0-to-1 phase
One of the most frequent questions I receive as a product-focused startup builder and advisor is something to the effect of “If you were our advisor/coach/product leader, what would you do in your first 100 days to help us?” It’s not an easy question to answer in a pithy way, so I’ll dedicate this post to the strategy and the model, bearing in mind that the specific implementation will vary dramatically based on what is found at each stage of the diagnostic.
Value hunting
The most concise way I can answer the question is “I will hunt for and sniff out the value.” Value can take many forms, of course, from technical intellectual property to business and customer value, but suffice it to say that the strategy is to first find what drives the business in the mind of customers, what makes the largest and most differentiated impact on their mission, and which parts of the value proposition matter most in client engagements. The diagnostic is much broader than this, of course, and includes everything from who is on the team to the tools in use to the more substantive issues just mentioned.
First, listen and absorb
“Where is the roadmap?” and “What should our strategy be?” are often asked within the first two weeks in a new setting. The desire is well intentioned, and one cannot blame an organization for immediately wanting clarity, but it is also a bit ludicrous: creating first-class answers to these questions takes on-the-ground information that even the best-equipped product leaders won’t have when first engaging. Crucially, when a firm early in its journey hires a new CPO or VP of Product, it usually means these problems are not being solved organically; a killer set of technical founders who aren’t sure how to build and scale a business is one example. It’s cheaper and easier to align with fewer cooks in the kitchen, so traditionally the CEO or another founder plays these roles if they can. If they cannot, it generally means there is a gap, in which case the diagnostic answers will not be obvious or in plain sight, or there will be too many answers and no decision yet about which ones matter most. By contrast, a successful startup without a product-scaling challenge at its current phase likely already has these answers, even if they aren’t well documented or don’t reside in one person’s head or a specific job title.
The one thing
Before we can move into the methodical set of diagnostic challenges, we must first be clear about what one thing the product is supposed to do. Later we will determine if the one thing is the right one thing, but the first step is simply to ask the founders what it is. What problem in the world is this product designed to move the needle on? Why does that problem matter? Who cares about solving that problem? Armed with a sense of the one thing, I begin to engage in a methodical review of the key factors and disciplines that will propel successful outcomes. As an aside, it is okay to have secondary objectives, but the top objective must be clear and it must be at the top; otherwise all subsequent experimentation, research, marketing, sales, and feature building may be misaligned with this one thing. A few examples of the one thing might sound like “integrating disparate datasets” (Palantir), “helping people remember hard-to-remember details about people they meet” (NetworkNerd), or “building a self-driving car” (Waymo). Of note, the one big thing does not have to take the form of a formal customer problem statement, and it also isn’t the marketing or positioning statement; it is simply a factual expression of what the organization thinks it is doing right now, and it needs to be the value proposition that would sell the product on its own even if there were no other features.
With the one big thing in mind, I enter the diagnostic phases. Generally I tackle these in parallel as schedules allow, since early findings in each area may inform questions worth asking in the others. I try as hard as possible to just listen and not opine on what should be, at least not yet.
Team
The team diagnostic is all about the people-startup fit. Does this organization have the right players on the team, with the right expertise and the right backgrounds and the right DNA to match the one big thing? Generally I begin by interviewing each of the founders, each of the executive team members, and the team itself. I’m asking questions like “Why did you found this company in the first place?”; “What are your strengths and weaknesses, and the company’s?”; “What’s working and not working?”; and, perhaps the most interesting, once the baseline is established, “What are you most proud to have achieved or built here in the past few months?” The goal during this phase is to identify whether everybody fits, whether there are gaps in talent, whether any roles need to be adjusted, and whether there are people who are simply out of sync with the strategy. The phase includes employees, contractors, and consultants if they’re playing primary roles in the business. It may sound obvious, but I’ve frequently discovered at this phase challenges such as a startup building cutting-edge database technology without a database expert, or doing hard science work without a full-time, equity-compensated scientist, or attempting to uniformly shape the output of 100-plus engineers without a product manager (all real examples, with company names dropped to protect the well-intentioned). It’s rarely anyone’s fault per se, but startups move so fast that it’s easy to drift into a state in which key roles that should exist are unfilled or in which a coding-academy graduate with two months of experience is making long-term architecture decisions. It’s not that they cannot be successful this way (the person and their passion reign supreme, not the resume or pedigree), but there must at least be a plausible story for how the talent matches the strategy.
Product management
The first thing I do with the product is play with it. This may sound obvious, but it’s shockingly overlooked. My first task is to touch every single button, every single drop-down, every single setting, across every single product or product line at the company. I’m usually hit in the face pretty quickly with some sort of blocker: sometimes I just don’t understand what the feature is intended to do because of a lack of domain knowledge, but sometimes the features just don’t make sense. The first job in the product portion of the diagnostic is to figure out what’s actually there and what value each feature is believed to be delivering to the user, and of course to surface blocking bugs and issues, for example crappy performance or an inability to log in.
Next I attempt to learn to give a demo. Learning the demo is huge because it generally forces you to validate the feature set, validate the product development execution, validate a sane promotion/DevOps/release model, and even validate the messaging of the product. I frequently find that pre-Series A, giving the demo is not straightforward. It’s entirely okay if the demo is backend technology only, but if it cannot be shown in some way, it likely cannot be valued or sold to a customer, so this is essential.
At the same time, I want to consume the existing roadmap or backlog so I don’t waste time writing down features everyone already knows are missing.
Finally, I’m looking to see what sort of intelligence-gathering machine exists. How does validated customer learning make its way back into the product? Is the team using Linear/ClickUp/Jira with a backlog? Is there a feedback email or Slack channel? Are they implementing Intercom, Productboard, etc.? Is the top customer success person tightly integrated with the product team? Same for sales and marketing. It’s key to be sure that what is learned is not lost and that it’s captured in a manner that can be referenced in the future.
Engineering
For a long time I didn’t include engineering as a phase even though I always performed it, because in more hierarchical or traditional organizations there is a lot of pushback when a product person, or anyone with a non-computer-science background, starts probing around the architecture and technology stack choices. That pushback is a fallacy, but it’s also a delicate balance. Product must understand the opinionated decisions that engineering has made on its path toward solving the customer’s problems, along with the tradeoffs behind them: (1) to be intelligent in front of the customer; (2) to have intuition for what’s hard and what’s easy to build, to avoid a small experiment taking too long or to avoid skipping an experiment because it seems hard when it’s actually easy; (3) because sometimes these decisions don’t meet customers’ future needs or add up to unique IP on the path to the big thing; and (4) because sometimes product can contribute directly to the tech stack with small bug fixes or by getting hands dirty pulling analytics, prototyping, or testing.
If the company is breaking ground and innovating on the technology end, the first basic gut check is whether the CTO or VP of Engineering is amazing. In cases where the technology itself is not hard to build, that bar is not as essential, but there still must be competent engineering leadership to resolve weedy technical challenges, to spike out answers to new ones, to be the technical voice in front of the customer, and to resolve day-to-day engineering tradeoff decisions such as how to model data or which architecture to select for any given feature’s implementation.
Market and marketing
Next I’m looking to understand the market, the messaging, and the marketing. What market does this product live in? How does it operate? Who does the market trust, and what are the incentives? A great tip I’ve learned on this one when facing a new market is to talk to a hedge-fund portfolio manager who invests in the sector, as they will often have a fantastic five-minute-or-less synthesis of how the market “works,” which will help you to back into the assumptions you need to check and test.
For example, at Slingshot, when we experimented with selling overhead imagery to consumers, we had to first understand that the existing satellite imagery market was dominated by the federal government and mostly consisted of a few players that owned the lion’s share of the satellites, and that the aerial imagery market was largely acting at the behest of insurance companies, local governments, and architecture and engineering firms. Together, these implied that orders had to be large, which informed a huge amount of follow-on decision-making. Once you understand the basic mechanics of the market, you need to understand, but not obsess over, the competitors. You’re not done with this phase if you cannot identify at least the top three competitors, their value propositions, and why yours might be different.
Next, how is the business and sales end of the company talking about the product, and is the customer listening? Are you using the right terminology to meet the customers in their world, not yours? At Palantir in the early days, we could have walked into government agencies and said, “We have a novel take on the graph database and numerous services that will unify records from multiple systems into it,” but instead we learned to first say, “We can take a suspected terrorist target and see how they are connected to people you can access through an understanding of their digital footprint across clandestine collection systems.” Needless to say, the latter lands much more powerfully.
Finally, what is the go-to-market strategy? I will admit, I hate this term; it encompasses far too much to far too many people, so I will scope it down. For an enterprise business, how does the company plan to earn its next dozen or so customers? For a consumer business, how does it plan to earn its next 10,000 users (if it’s earlier-stage)? Is this answer plausible, and does it match up with the roadmap, value proposition, and engineering decisions? For example, if sales thinks the path is through multinational consumer packaged goods companies and the product only caters to startup tech bros, there’s a problem.
Customer sensing
Who are the company’s existing or planned future customers, and why did each customer buy the product in the first place? Despite the “one big thing,” what is each of the customers actually using the product to do? What business value is this product generating for these customers, and if they didn’t have it, what would the alternative state of affairs be and how painful would that be for them? Usually this can be assessed through a great discussion with customer success or sales engineering, but I highly recommend also getting some firsthand customer signal even at this early phase if it’s politically possible (if it’s not, that is itself a red flag). Often the translation of the desire or need is lossy, and precision is everything in product. Precision in understanding customer needs and delivering exceptional solutions to them is the difference between Facebook and every other social media company we no longer speak of, and it’s the same in every industry. The startup ecosystem is littered with “Well, my company was doing Y [example: on-demand food delivery] before X [Postmates] existed.” No, it was not; this is harsh, but it’s the reality. If you were X, you would be X, setting aside issues of market timing, for example trying to be an AI-driven company 20 years ago. Generally, the other company, X, understood something about the customer just a little bit better, or at least did something about it, and that is why they won. Of course, maybe you did understand the problem perfectly but had a co-founder explosion or ran out of money, so I don’t mean to imply that this is the only reason Y never became X.
In this diagnostic I am also looking to assess traction. How many users there are, what churn looks like, what the customer acquisition cost is (if that’s relevant, which it is not always early on), and every derivative metric that matters in product all belong here. Crucially, this is where I am looking to find a ride-or-die customer, and ideally a compelling set of them. There are many names for this customer, but I like the rap-lyric version “ride or die” because it means a person who is so in love that they will ride with you even though they know you are a criminal, and would rather die or go to jail than not be with you. Of course, it’s not nearly so dramatic in startups, and we obviously don’t want to be anywhere near illicit activity, but the degree of love is the same: do you have a customer who doesn’t care how much of your product is broken or failing, because the one thing they need is so important to them and you solve it, and who will therefore stay with you at nearly all costs?
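For readers who want concrete numbers behind the traction conversation, here is a minimal sketch of the two metrics named above, churn rate and customer acquisition cost, using the standard textbook definitions; the figures in the example are hypothetical, and your business may track different variants (gross versus net churn, blended versus paid CAC).

```python
# Minimal sketch of the two traction metrics mentioned above.
# Standard textbook definitions; the numbers below are hypothetical.

def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Fraction of customers lost over a period (gross logo churn)."""
    return customers_lost / customers_at_start

def customer_acquisition_cost(sales_marketing_spend: float, new_customers: int) -> float:
    """Total sales and marketing spend divided by customers acquired in the same period."""
    return sales_marketing_spend / new_customers

if __name__ == "__main__":
    # Hypothetical quarter: 200 customers at the start, 10 lost,
    # and $150,000 spent to win 25 new customers.
    print(f"Quarterly churn: {churn_rate(200, 10):.1%}")           # 5.0%
    print(f"CAC: ${customer_acquisition_cost(150_000, 25):,.0f}")  # $6,000
```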
Finally, I’m looking to understand the job stories, user stories, and/or story-mapping for the user’s journey in the product. With analytic products these are a bit more open-ended, and with transactional products they can be airtight. It’s less important how great they are at the beginning and more important that they are understood widely, regardless of what format or philosophy reigns in the culture.
Strategy, vision, and roadmap
With all of this signal in tow, I begin to look critically at the existing strategy, vision, and roadmap, and I’m looking to assess whether they line up, whether they are credible, and whether they leverage the startup’s unique DNA. Lining up is easier to assess: if the target market is commercial insurance and the persona is the claims adjuster, then I’d want to be sure the user stories reflect that adjuster and his/her specific needs, and the same for the marketing, messaging, etc. The test of “credible” is more of a judgment call. Does the company have the track record, the credibility, and, as Yale/Deloitte used to call it, the eminence and “brand permission” to represent itself as a group that can credibly solve this problem? With that in mind, I begin to construct and/or revise a single document that encapsulates the vision, strategy, and roadmap, and to solicit feedback on it. The roadmap, in my mind, is always a living document, especially at early-stage companies, because so often our assumptions do not pass muster in the market, so I aim to revise and update it no less than once a month.
Hunting for value isn’t a sprint; it’s a deliberate excavation of your company’s DNA. In those first 100 days, you’re not just building strategy artifacts but uncovering the heartbeat of your product. So before you start pontificating about strategy, listen, absorb, stand back, and assess. Then, armed with ground truth, you can start crafting a vision and structure—and team—that’s more than just pretty slides: it’s a battle plan for delivering knockout value based on the company’s unique strengths and the market’s unique gaps.
Hi! If you enjoyed these insights, please subscribe, and if you are interested in product coaching or fractional product support for your venture, please visit our website at First Principles, where we help the most ambitious founders make a difference.