Ship the right stuff: A small SaaS team’s lean user research methodology

At BugHerd, we don’t make any updates or introduce new features to our product without some form of user research to back up our decisions. But as a small team, taking the time to do this can have a big impact on product velocity.

User research is critical to our product development process, helping us to:

  • Determine what problem(s) we should be solving for our customers.
  • Get a better understanding of how our customers engage with our product, and why they use it.
  • Most importantly, reduce the risk of committing time and resources to product updates that customers don’t use.

In this article, we’ll give you an insight into BugHerd’s research process for product development, including where user research fits in, the methods we find most useful, how we define a research piece and how we decide when we know enough to make a product decision.


Where user research fits into the BugHerd product development process

At BugHerd, there are really two key parts of our process where customer research is critical.

  1. Strategy, roadmap, planning – making sure we understand our customers and move our product in a direction that supports them.
  2. Feature development – making sure the solution we build actually solves the problem.

1. Strategy, roadmap & planning

Customer conversations, the “Always on” research

To truly understand your customers, you really need to make a habit of talking to them. At BugHerd, we try to facilitate these conversations at every possible opportunity, such as product demos, support calls, survey research follow-ups, and automated onboarding emails.

The great thing about these touch points is that the contact is almost always initiated by the customer. It’s on their terms: whether they just want to be shown through the product, have questions about pricing, or are stuck with something in the product, we’re able to deliver value (answers to their questions) while also delving deeper with our own questions to really understand the problems they’re trying to solve with BugHerd.

Another great aspect of these conversations is that we speak with both new and existing customers. Every day new customers are discovering BugHerd, so it’s important we understand how the needs of these new customers may be changing over time in comparison to our existing customers. When talking to existing customers, it’s easy to fall into the trap of always engaging with the same cohort, which may give you a narrow view of the customer problem.

Importantly, these conversations are fairly free-flowing and open to allow for as much discovery as possible. We may have a few key questions we like to ask, to understand a bit about their business, the problem(s) BugHerd solved for them, and where they discovered us, but beyond that, we keep it open.

In the end, these conversations have the most impact on developing our product strategy & roadmap, and most importantly, hone our judgement for when we need to make product decisions (more on this later).

Tools that enable this…

Zoom, Typeform, Zapier, Vero/Intercom

Product Market Fit survey

Popularised by Sean Ellis, the Product Market Fit survey is a simple survey framework to help you iterate your way toward better product-market fit. We’ve had success extending the great work of Rahul Vohra and the team at SuperHuman.

At its core, the product market fit survey asks customers the following question…

How disappointed would you be if you could no longer use our product?

  • Very disappointed
  • Somewhat disappointed
  • Not disappointed

From this we can understand who we’re really solving problems for (those who answer “very disappointed”), and who, with a little more effort, we could convert from merely liking our product to really loving it.

Without going into too much detail, here’s how it works for us.

  1. Survey your paid customers after they’ve been using your product for 3 months (this timeframe is important to ensure you’re getting responses from engaged customers). If you’re interested, here’s an example of ours.
  2. Segment the customer responses (we use job role) to find your core customers. For us, that’s Project Managers.
  3. Analyse the feedback from the customers who responded “very disappointed” to the question above. We want to understand what makes them love the product so much, and double down on making those features even better.
  4. Analyse the feedback from customers who responded “somewhat disappointed” to the question above. We want to understand what features are missing that they really need in the hope it will convert them into the “very disappointed” camp.
  5. Build out your feature roadmap by doubling down on the features your users love, and addressing the missing features that hold others back.
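Steps 2-4 above boil down to a simple aggregation: group responses by segment and look at the share of “very disappointed” answers. Here’s a minimal sketch of that in Python; the data and field names are illustrative, not BugHerd’s actual survey schema, and the 40% threshold is Sean Ellis’s widely cited rule of thumb.

```python
from collections import defaultdict

# Hypothetical survey responses: (job role, answer to the PMF question).
responses = [
    ("Project Manager", "very disappointed"),
    ("Project Manager", "very disappointed"),
    ("Project Manager", "somewhat disappointed"),
    ("Designer", "somewhat disappointed"),
    ("Designer", "not disappointed"),
    ("Developer", "very disappointed"),
]

def pmf_score_by_segment(responses):
    """Return {segment: share of 'very disappointed' answers}."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [very, total]
    for role, answer in responses:
        counts[role][1] += 1
        if answer == "very disappointed":
            counts[role][0] += 1
    return {role: very / total for role, (very, total) in counts.items()}

scores = pmf_score_by_segment(responses)
# Sean Ellis's rule of thumb: a segment above 40% "very disappointed"
# is a strong signal of product-market fit worth doubling down on.
core_segments = [role for role, score in scores.items() if score > 0.4]
```

The same grouping then lets you read the open-ended feedback separately for the “very disappointed” and “somewhat disappointed” cohorts within each core segment.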

If you’re interested in more detail, check out Rahul’s article which does a much better job of describing the process than I ever could.

For us, this process was fundamental in the early days to getting our product back on track.

Tools that enable this…

Typeform, Zapier, Vero/Intercom

2. Feature development UX research

This is about understanding how the identified customer problem could be solved, what critical features to solve it, and importantly making sure the solution we design solves the problem.

At the beginning of any major feature development cycle, we conduct UX research in order to pinpoint the underlying problem our customers are facing and that we are trying to solve. This can take the form of (but is not limited to) customer interviews, surveys and desk research. This helps to give us a customer’s view of the problem before we jump to the solution, and hopefully reduces the biases we as designers might be bringing to the table.

Once we have a solution design, we then validate whether the ideas/designs that we have come up with can solve our customer’s problem. This can take the form of (but is not limited to) moderated and unmoderated user testing.

Finally, after much iteration, when we’ve finally completed development of the feature, we’ll do another round of user testing or release as a closed beta to some specific customers to gather feedback before releasing to our whole customer base.


Deciding what user research methods to use

Speaking directly with your customers is king, which is why we prefer customer interviews and moderated user testing as our main methods of research.

Whether it’s formal, with a really focused interview script, or informal customer support conversations, the insights can be gold.

These are the types of research formats that allow you (as the researcher) to ask open-ended questions or enquire further into why a user might think, feel or behave in a particular way.

Of course, we aren’t always able to speak directly with our customers for our research. BugHerd is a piece of software that’s used across the world, which means the timezones of some of our research participants don’t always line up with ours. 

When customer interviews aren’t possible, we’ll use survey research as a substitute. 

And when moderated user testing isn’t possible, we’ll set up an unmoderated user testing environment. 

Method 1: Customer Interview

Objective:

Customer interviews are a qualitative form of research.

They’re a great way to uncover the reasons behind problems (e.g. why users behave the way they do, why something is a pain point, etc.).

Use customer interviews as a way to identify potential themes and patterns of user behaviour.

Timeframe: Min. 4 weeks

  • 0.5 day to define the research brief and the interview questions
  • 1-2 days to recruit participants
  • 30min – 1h for each interview
  • 1-2 days to synthesize the interview insights and summarise them in a report

Important to note:

  • Interviewing five users may be enough for themes to emerge in your research, but we recommend going for at least ten.
  • Interviews are a forum in which you can ask open-ended questions. Use them when you need to generate useful information via a conversation rather than a vote.
  • Interviews are an “attitudinal” form of study, meaning the data you mine is self-reported by users. Avoid asking users questions that require them to recall actions or thoughts from a long time ago, as that can elicit inaccurate or made-up answers.
  • ALWAYS allow time for the customer to ask you questions. Often a customer will have agreed to the call because they’d actually like help with something else entirely.

Method 2: Survey

Objective:

Surveys are a quantitative form of research.

They’re a great way to get a consensus on a topic (e.g. how many people do this vs. do that).

Timeframe: 1-2 weeks

  • 0.5 day to define the research brief and the survey questions
  • 1-2 days to recruit participants
  • 1 day to synthesize the survey insights and summarise them in a report

Important to note:

  • Survey at least 30 users. The more participants you have, the better, as your data will be less susceptible to anomalies.
  • Online surveys do not offer the opportunity to follow up, so be sure to articulate the survey questions well.
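A quick way to see why more participants make survey data less susceptible to anomalies is the margin of error for a sample proportion. This is a standard statistics sketch, not part of BugHerd’s process:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sample proportion.

    p = 0.5 is the worst case; z = 1.96 corresponds to a
    95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 30, 100, 400):
    # n=30 gives roughly +/-18 percentage points; quadrupling the
    # sample size halves the margin of error.
    print(f"n={n}: +/-{margin_of_error(n):.0%}")
```

So 30 respondents is enough to spot strong majorities, but close splits (say 55% vs. 45%) need substantially larger samples to be trustworthy.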

Method 3: Moderated User Testing

Objective:

User testing is a way to validate your design (via a prototype) and uncover any usability issues that may exist.

Timeframe: 1-3 weeks

  • 0.5 day to define the research brief and the user testing questions
  • 1-2 days to recruit participants
  • 30min – 1h for each user test session
  • 1-2 days to synthesize the insights from the test sessions and summarise them in a report

Important to note:

  • In a moderated environment, a research facilitator will sit in with the participant as they test the design prototype.
  • The participant’s actions/feelings can be observed by the research facilitator as they interact with the design prototype. In this environment, the research facilitator is afforded the opportunity to ask follow up questions if needed.
  • Five users may be enough for themes to emerge in your research. From our experience, we’d recommend having at least ten.

Method 4: Unmoderated User Testing

Objective:

User testing is a way to validate your design (via a prototype) and uncover any usability issues that may exist.

Timeframe: 1-2 weeks

  • 0.5 day to define the research brief and the user testing questions
  • 0.5 day to set up the prototype in an unmoderated environment (e.g. Lookback)
  • 1-2 days to recruit participants
  • 1-2 days to synthesize the insights from the test sessions and summarise them in a report

Important to note:

  • In an unmoderated environment, the participant tests the design prototype in their own time, and their thoughts are documented either via video recording or a follow up survey.
  • Five users may be enough for themes to emerge in your research. From our experience, we’d recommend having at least ten.
  • Unmoderated user testing does not offer the opportunity to ask follow-up questions, so be sure to articulate the user testing questions/tasks well.

Defining our research pieces

We use ‘OKRs’ (Objective and Key Results) as a simple goal-setting framework for everything that we do, including our user research efforts. 

OKRs comprise an objective (a clearly defined goal of the research) and key result(s) (measurable success criteria used to track the achievement of that goal).

In addition to the OKRs, we also define who we need to speak to, what research method(s) will be used, and which areas of interest to focus on.

Here is an example: 

  • Objective – Test the updated feedback UI with real customers to validate the latest design decisions. 
  • Key Result(s) – Determine whether there’s enough of a positive response to confidently proceed with the UI updates
  • Who do we need to speak to – 
    • BugHerd customers 
    • Member and guest roles 
    • n=10-20 (i.e. 10-20 customers) 
  • What research method(s) – Moderated user testing 
  • Area(s) of interest – 
    • Has this improved the feedback experience on mobile? 
    • Are users able to discover the often-missed ‘screenshot annotation’ feature of the current feedback UI? 
    • How do users feel about the updated look & feel?
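A brief like the one above is really just a small structured record, which makes it easy to template and reuse across research pieces. Here’s a hypothetical sketch using a Python dataclass; the field names mirror the example but are not a real BugHerd template:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchBrief:
    """An OKR-style research brief (illustrative structure only)."""
    objective: str
    key_results: list
    audience: str
    sample_size: range
    method: str
    areas_of_interest: list = field(default_factory=list)

brief = ResearchBrief(
    objective="Test the updated feedback UI with real customers",
    key_results=["Enough positive response to proceed with the UI updates"],
    audience="BugHerd customers (member and guest roles)",
    sample_size=range(10, 21),  # n = 10-20 participants
    method="Moderated user testing",
    areas_of_interest=[
        "Improved feedback experience on mobile?",
        "Discoverability of screenshot annotation?",
        "Reaction to the updated look & feel?",
    ],
)
```

Writing the brief down in a fixed shape like this forces every research piece to state its goal, audience and success criteria before any interviews are booked.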

Assessing the weight of research for product development

So how do you decide whether the research validates your hypothesis, dispels it, or is inconclusive? And if inconclusive, where do you go from there?

At the end of the day it’s always a judgement call. So you need to make sure your judgement is on point.

Honing your judgement is like training a muscle. The more conversations you’re having on a weekly basis with new & existing customers, the more empathy and understanding you have for their problem and the better your judgement will be. This is where the “always on” research we spoke about above is your secret weapon.

That said, if you’re having trouble deciding whether your research validates or invalidates your hypothesis, first question whether you had the right audience. Do the people you spoke to really represent those you’ve identified as having the problem? If not, how can you better reach those people? Or, if you can’t find them, do they really exist in high enough numbers for this to be worth doing?

If the audience looks good, question whether the problem/feature is actually as critical as you first thought and importantly, reconsider it against other ideas in your roadmap. You should always be ready to kill an idea if you’re not confident it’s the right thing to do. 

In summary, the point of research is learning. Sometimes you’re lucky enough to learn something that casts things in a new light. When this happens, be ready to adapt rather than charging on just because “it’s in our roadmap”. Like Elon Musk says of the SpaceX process… “Assume your requirements are wrong. Question everything”.

