One Does Not Simply Talk To Users
Talking to users is as much art as science if you want to derive genuine insights.
Talking to users is great, but getting the valuable insights we need from these discussions is equal parts art, science, and process. Unfortunately, most people asking to do these interviews focus on the process and not the result. We quickly gather dozens of research participants, prepare a script of questions, run the interviews, and then share a massive, unintelligible spreadsheet of notes about what was said. Nothing changes for the product except that we all feel like we checked an important box. The intentions are good, but the problem is exacerbated by the growing desire across the business to “be close to users.” Marketing, product, design, engineering, and customer success all want to be part of the process, yet we routinely fail to extract the value, and that failure drives Founders and executives crazy. Poor execution sours leadership’s belief that anything new will be surfaced, and it hurts everyone whose job depends on great access. Here’s how to turn this around.
Aim Small on Expectations, Big on Volume
First, we need to set expectations. A lot of people in a growing product company need routine access to customers to do their jobs well. But an interview performed by sales isn’t the same as one done by design or one done by product. We cannot thrive in an environment in which multiple sets of people, representing entirely different interests, all show up to ask their questions in the same 30-minute discussion once a month. If sales owns customer relationships in your company and believes it is doing its part by brokering scarce access like this, your sales culture needs a reset and realignment to the value the rest of the company brings in growing and building a product. This is not to say that multiple constituencies can’t join the same discussions; they can. But we need to be realistic about how much we can learn in one interview and how broad the surface area of learnings really is. In one Zoom meeting we might have time for six or seven legitimate questions. And yet here are the different people in the room:
Product wants to perform a value test and compare different solution options.
Design wants to test the usability of a new or existing feature.
Marketing wants to understand the customer’s behavioral and demographic background and process for learning about our product. It’s also looking for the voice of the customer to inform messaging.
Customer Success is looking for validation to support bug fix requests and to gauge if the customer has enough training materials.
Engineering is curious what simpler alternatives might still satisfy the need.
Sales is looking to gather data on price sensitivity and the offer.
User Research is trying to parse the complete user journey and understand deep motivations.
And the list goes on and on…
The average “good” interview surfaces maybe three to five critical insights across this entire realm of questions. That’s the expectation we need to set. We also need to set an intent for the discussion, and that intent should include only one of the above sampling of interests. Running a usability test for a new feature in 30 minutes is great. Doing a value test and learning about the customer’s motivations is another right-sized chunk of possible observation. Mixing and matching these, while efficient in the moment, leaves each team learning just enough to be dangerous but not informed. We have to cultivate a culture of prolific, frequent customer contact so that we don’t have ten people from across the organization all overanalyzing the same 15-minute Zoom recording…from an early-stage sales call…where we have to make a lot of guesses because someone’s Uncle’s cousin made the intro…and that’s why Sal had to do the interview…even though Sal works in People Ops and is brilliant but has no idea how to conduct one of these.
The second expectation flows directly from how little we can learn in one chat: talking to users is ongoing, and the volume of touch points must be high so we don’t have to cram everything into a few sessions with a few people. The failure mode of expecting too much from one meeting generally arises because we don’t have that many users to talk to, so everyone with access to one wants to hoard them and batch up a lot of questions for these rare, magical moments. This is a symptom of an entirely different set of problems—either bad internal culture (e.g., a possessive sales force) or an external signal from the market that our product isn’t awesome enough yet to have earned other people’s time in significant numbers. Highly specialized emerging-technology products can be excepted to some degree if the total user base on planet Earth is less than a few thousand people. When this is the case, we need Founders or other team members who have done the job before (and recently!) or who have access or experience that allows them to speak credibly about the users on an ongoing basis. For example, if you’re building a developer tool for quantum computing and you were a quantum computing developer, you can likely get away with this for a period of time. And if you’re building something for a community like this and don’t have that access and can’t hire, advise, or buy your way into it, that itself is a sign that this might not be the place for you to find product-market fit.
Now, Execute With Thought
Now that we have sufficient people to talk to and narrow expectations for how much we can learn in any one interaction, we can focus on the interview session itself. Remember, if we learn three or four nuggets of insight in an interview, it’s a win. We can’t entirely control who those insights will benefit (product, marketing, sales, etc.), but we can steer the questions toward one narrow goal: for example, a usability test, a value test, or building out a user journey map. Building an amazing product requires hundreds if not thousands of great insights across every aspect, from initial awareness to core value propositions to who renews and becomes a power user, so we need to be prepared that talking to users isn’t a start-to-finish process but an ongoing activity we never complete.
Don’t Delegate To the Least Experienced
I get it: a low-signal discussion with a user can be a painful use of time, and because of the process argument I made above, it’s easy to find templates or hire an agency to handle your interviewing for you. That agency, you should expect, will put its most junior person, probably from outside of the United States, on the task. For those rare moments where that person is experienced and brilliant, hooray, but for most of us the result is going to be really poor. The last few issues put this into context and hopefully articulate why this job is one you shouldn’t outsource (at least not entirely). If you do outsource it, it needs to be to someone highly credible, because the results will alter your strategy, and bad results will steer it in the wrong directions.
Asking the Right Questions
I won’t belabor this point, as many great books have been written on the topic, but I’m repeatedly surprised how often someone requests to speak to users, and then I join the calls and the questions prove they haven’t read or digested anything ever learned about this specialty. You get half of the interview focused on demographics, a ton of hypotheticals, and questions like “What feature do you want to see built next?” or “Do you like this?” that don’t move the needle. The questions need to explore real-world circumstances and, like a scientist, probe at motivations and impact without explicitly saying what you are testing. A related failure is asking the right questions but not sequencing them systematically to prove or disprove your value hypothesis. If we’re comparing two approaches, we need to ask questions that will reveal the participants’ preferences even if unstated. For example, you might ask about the cost to the business of not solving a particular problem, wait ten minutes, ask the same question about a different option, and then compare the results.
Follow the Threads that Matter
I’d argue that the number one issue I notice on mediocre user interview calls is when the person running the interview stays on script after the user tells a story that might be the secret to unlocking huge value. It sounds like this:
Researcher: “How important is that aspect of your job?”
Participant: “It’s pretty low for me. But that reminds me, if I don’t get the TPS report filed with all the right inputs, we get fined $20,000 per incident; it happened three times last week.”
Researcher: “Great, and can you tell us how you found our product?”
It’s important to avoid this tight focus on a set of scripted questions that completely misses the forest for the trees. Scripts are fine to the extent that they prompt you to ask about the things you want to test or learn, but charging through them mechanically tends to destroy conversation, and conversation and stories about real situations tend to illuminate real signal.
Interpret Answers Credibly With Context
This is by far the hardest skill to teach and where experience matters the most. Let’s take our TPS report example for a moment. If it’s a good match for our business, it should become the focus of the interview. But it may be the case that the fines are levied for legal mistakes while our company DNA is all about building hardware, for example. Knowing what’s relevant, and recognizing that you are seeing the tip of an iceberg, is half the challenge. The other half is recommending initial solutions that get at the core of what the customer really needs. It’s easy to listen to someone say they struggle with weight loss; it’s quite another thing to diagnose that a solution that increases the percentage of vegetables in their diet through a clever delivery mechanism, filling them up at the same time, is a better solution than another calorie counter. This usually doesn’t happen in real time during the discussion with the user, but the best product people are listening for these seeds and imagining many possibilities based on the customer, industry, and technology constraints.
Focus Narrowly, Often, To Be Broad
It’s very difficult to uncover what’s happening in an entire industry, or even for a person’s specific job, in one interview. Instead of trying to hit the middle ground, it’s better to aim narrowly across a wide set of customers, using a different focus with different subsets of them. We discover the middle by going deep across many examples, not by directly soliciting details about the middle. For example, if we are building a product for reserving space on a rocket to launch new satellites, we learn a lot more by asking five customers “How did you book your last three launches, and why did you do it that way?” than by asking “What is the most common launch software?” The latter won’t hurt you; it’s just not that helpful beyond initially identifying the existing suppliers (which you could do with Google), whereas the former will tell you what you’re competing with and give you context about what you might need to do to be superior. We have to be deep by being narrow across a range of topics.
We should also be careful of going too narrow and wasting time. A set of 20 interviews where we ask the same exact scripted questions about the password reset flow and learn nothing new after the third one is a waste. It’s equally important to zoom out often enough to avoid going really deep on lots of little problems that don’t impact the highest value issues for the customer. As much as I encourage lots of interviews, people’s time is valuable and ultimately scarce, and if we waste it, it’s hard to re-acquire it. It’s also not okay to deeply understand the moment in time we are trying to impact but be clueless about what happens right before customers seek out our product or right after they finish.
In summary, we’re not wrong to be skeptical when we’re asked so often for more access to users, but:
a) We can’t let access become scarce,
b) We need to respect that different constituencies need different signal to be successful,
c) We should manage our own expectations so we accomplish something real with each interview, and
d) We need to reject low-quality, templatized interviewing.
Hi! If you enjoyed these insights, please subscribe, and if you are interested in tailored support for your venture, please visit our website at First Principles, where we focus on product to help the world’s most ambitious Founders make a difference.