How to make better decisions through research (and when you should go with your gut)
An interview for startups with Behzod Sirjani of Yet Another Studio, ex-Facebook and Slack
I’ve found that with developer products there’s often an assumption that we know our users, when in reality we often don’t. This interview is about research, but it’s really about something more fundamental to a startup’s success: the art of gathering evidence to make smarter decisions quickly.
When I want to talk to someone about making smarter decisions, the first person I call is Behzod. I had the pleasure of working with Behzod at Slack, and we’ve since teamed up again with other companies and as Reforge instructors. Today Behzod partners with companies like Figma, Dropbox and Replit to build effective research practices.
We cover how to:
Gain confidence about important decisions without slowing anyone down
Avoid the most common research mistakes startups make
Make user insights actionable, not just interesting
This article is for you if you’re someone who needs to move fast, doesn’t have resources to waste, and doesn’t have time to learn for learning’s sake.
How would you explain the importance of research to a technical founder who is “building for themselves” or thinks research is something only big companies do?
The first disconnect about research is that people think it is simply talking to customers. So they think, “If I’m building for myself, then I don’t need to talk to customers.”
Maybe at the earliest stages of your startup this can be true, but it’s always helpful to see how other people engage with the same problem you’re solving.
My definition of research is: gathering and synthesizing evidence in service of making better decisions.
If I want to gather evidence to make smarter decisions, what’s the first concrete step?
The framework I use when working with founders is asking them to fill in the blanks in the following sentence…
“I’d be more confident about <this project or decision> if I <heard/observed> <this type of person> say or do <X>.”
This allows us to have a conversation about what the “X” is that you want. Is it that you want to hear a certain type of developer talk about how they manage CI? Or observe how this type of person implements a security feature?
This statement identifies what you’re trying to gather, how you want to gather that data, and who you want to gather that data from.
Once you’ve answered that, what’s the next step?
From there, you need to answer three critical questions:
Is this even feasible? Can you pull it off? Something like sending a survey to Fortune 500 CTOs is not feasible – they just won’t answer it, so pick another approach.
Is it reasonable? Think about how you would feel on the receiving end of this request. Would you, as a participant, find it fair and worthwhile to participate given your relationship to this company and how much they’re paying you?
Is it worth the effort? Is this something that’s worth my time and energy relative to all the other things I have to do as a founder? Or, is there another approach that gets me closer in a more reasonable amount of time?
For instance, when we worked at Slack, Stewart [Butterfield] was really good about asking this question. He would point out that if it would take us weeks or months of research to go from 50% confident to 90% confident, but we could ship tomorrow and revert if it didn’t work, we’d often choose to ship and spend our time on something else.
How do you know whether to go with your gut on a decision or wait and gather more data?
Think of making decisions like playing the game Wheel of Fortune.
First, you have to know how many letters are on the board, and how many letters you’re trying to solve for. Is it one letter or ten?
Once you’ve answered that, you can figure out what it would take to get that one letter, or those ten letters, before you can confidently make a guess to solve the puzzle.
In other words, what data would get you to the threshold of confidence where it feels like you can move forward?
It might be that you need to:
Talk to an expert
Run an experiment
Prototype something
Do user research
Something else
This frees you up from feeling like every decision needs to go through research. Sometimes, research is the highest-leverage way for you to feel better about the bet you’re making, or de-risk it, or give you clarity – but there are many other ways to get that confidence on a decision.
Where should startups looking to sharpen their research skills start? What should they NOT do?
A common trap startups fall into is they decide they want to do more research, and then just go chronologically – “let’s do research for the next thing we’re shipping!”
Instead, you should look at your next 6 months or year. Ask yourself, “If I could increase my confidence on 2-3 things on this roadmap, what would those things be?”
Usually this is pretty easy for you to answer – you want to feel more confident about an upcoming launch, or a potential pricing change, or something.
Then ask, what would make you feel more confident about these things?
Perhaps you just want more time to prototype a design internally. Or maybe you want to show some customers and get feedback.
That approach ends up being the most helpful answer to “when should I do research,” because you are just working backwards from the things that are most important to your business.
If you just want to do research to cover your ass and make sure customers aren’t angry about a change, I don’t know if that’s worth your time, especially in the early days of a startup.
I like this framing, because when people hear “research,” they often think in terms of quantitative data – but what you’re saying is, the answer to “what would make you feel confident about this” isn’t necessarily research at all.
The top-down approach of “we need to go do research” tends to lead to waste and isn’t a pragmatic way to use your time or your customers’ time.
As much as we like to think that we’re operating in very rigorous [data-driven] environments, very few of us are actually making daily high-stakes decisions to the point that we have to justify those decisions to anyone other than our boss.
Companies that do good pre-mortems tend to be better at having a wider variety of research tools in their toolkit. They say, “we’re going to do this launch. Here are the 2-3 top risk areas that may impact the launch. Here are the ways we’ve tried to de-risk those things – to prepare the market, to think through messaging, to fix the product.”
You really don’t need to go test with every customer, or send a survey, or de-risk every possible scenario.
What are the most common mistakes startups make around research?
Not having enough clarity about who you’re building for. If you don’t know who you’re building for, you don’t know who to talk to. This means you don’t have a good sense of how to appropriately weigh or interpret feedback that comes to you.
You see this with companies who listen to everyone who gives them feedback. They end up with a Frankenstein product that doesn’t make coherent sense because it’s not built for a specific problem and audience.
What other mistakes do you see startups making?
Asking people what they want and then building what they ask you for.
Having a customer tell you “I want X” doesn’t tell you why they really want X. It also doesn’t tell you why X is better than Y, or Z, or anything else in the market.
You should instead take the approach of finding the people who experience the problem that your product exists to solve.
Ask them: What have they already tried to solve that problem? How well do those solutions work? This should happen way before you ask them to describe a potential solution to their problem.
One way to avoid inadvertently asking users what they want is to avoid closed-ended questions that can be answered with a simple yes or no.
The question I hate hearing is “Would this help you?”
Even if the user says yes, it’s unlikely that you understand why or how it would help them. Putting “why” or “how” in front of that question changes everything.
How do you make sure you’re not over- or under-indexing on customer feedback?
Over-indexing on customer feedback is a problem I see a lot of startups struggle with. A customer tells you they don’t like the product, and you feel like you need to totally change it.
I really think about customer feedback on two dimensions:
Significance of feedback: How helpful is this feedback?
Saturation: Is this something I’ve heard over and over again, or is it one-off?
After you’ve had some conversations, pay attention to these two aspects – how consistent the feedback you’re getting is, and how helpful or significant it is.
For example, if you’re getting a lot of one-off, insignificant feedback, then you probably need to change who you’re talking to or what you’re talking about. Either you’re talking to the wrong person or you are not asking the right questions.
If you’re getting inconsistent yet significant feedback – in that your feedback is helpful but there’s no clear pattern to it – you probably need to segment your audience better.
You need to understand what it is about this person that led them to provide this feedback, and what it is about the other person that led them to give different feedback.
Let’s say you’re doing positioning research, and you’re getting inconsistent feedback. Maybe what you thought was one type of buyer is actually two very different types of buyers.
Buyer A is getting one type of value from our product, and Buyer B is getting another. I care more about Buyer A, so I’m going to go talk to more of that type of person and stop talking to Buyer B.
One common failure mode of research is that someone creates a great document, and then it just sits there collecting dust. What are some tangible ways to make research actionable?
Step one is to do research that the company actually needs.
People think the goal of research is about learning things, but there are plenty of things you can learn that have absolutely no bearing on what your company needs to do right now – or ever.
You really want to focus on the most critical decisions that are coming up soon. If people aren’t reading your work, it’s because it doesn’t matter.
Step two is to only do research in a way that aligns to how the company actually operates.
I see a lot of mismatches in timing – you need to do the research in a timeframe your company can actually use. If your company works in two-week sprints, it doesn’t make sense to do month-long research for the next sprint.
Step three is that you need to meet your peers where they are to make your data convincing.
People often share work in a way that’s easiest for them – like a deck – but you actually need to share work in the way it will be best received by your peers.
If people need to watch how customers do something to believe your research, then you need to be cutting video clips.
If people want a quote from a senior decision-maker to believe that they will buy something, you need that quote.
If my team works in Linear tickets, maybe I’ll convert my findings into Linear tickets and attach them to our epics.
If our prioritization is based on ACV [annual contract value], then I need to communicate in terms of total dollars.
It’s this translation into your peers’ language that needs to happen. It usually ends up being the last mile of work, and a lot of people skip this step.
Final question. If you could convince a founder to implement just one practice today that would make a difference for their growth, what would it be?
If you’re struggling with growth, you first need to be clear about whether you have a problem of acquisition, retention or monetization.
I use this when people come to me with basic research questions like, “Should we build this feature?”
Are you asking “should we build this feature” to attract more of the right customers, to help retain our customers, or to more effectively justify our pricing? Each of these growth levers looks pretty different in terms of how I would do the work, and each involves a very different audience.
Thanks for reading. If you haven’t already, subscribe to get our next interview delivered directly to your inbox.
Behzod is a research consultant and advisor. He currently runs Yet Another Studio, where he partners with early-stage companies to build effective research practices. He is also a program partner at Reforge, a venture partner at El Cap, and an advisor for TCV’s Velocity Fund.