We had the chance to sit down with Marieke, the Director of Research and Product Insights at UserTesting. UserTesting has become a staple in the UX community because their platform helps companies all around the world embrace human insight, and the company continues to advocate for building products and experiences informed by user feedback.
In this interview, Marieke gives us the inside scoop on how user research has evolved at UserTesting over the years, as well as the positive impact that Research Operations has had on their business. Let’s dive in:
Sofia: When did you join UserTesting, and how has your role evolved since then?
Marieke: I came in four years ago to grow the research services team at UserTesting. This is the consulting side of our business helping our customers, who are doing user research, leverage UserTesting to its fullest. At the time it was made up of only about 12 researchers. Now we have an incredible Professional Services department of researchers, strategists, and project managers.
I was in that position for about two and a half years, and during that time we didn't have anyone doing research in our Product org, so product would actually outsource research work to my team. Our product managers and designers would have to request a project from services, just like our customers. This model can work well for certain types of research, but it ends up being very reactive rather than strategic, and it lacks a sense of strong collaboration.
This all changed when we hired Chris Abad, our VP of Product & Design—he actually got hired when I was out on maternity leave. I came back and met him, and he immediately had this great vision for the product and the team he wanted to build. His first priority was hiring more designers; we had one designer at the time, but he also wanted to bring research into product and engineering.
Chris wanted to build his own research team, modeled after something Spotify was doing. I fought him for a couple of months on taking this on, but he convinced me, and it's the best move I've ever made. Part of what has been amazing about this role is that we built the data science and UX research teams together. The one data scientist at UserTesting, Doug Puett, was a friend of mine from outside of work, and I had referred him to UserTesting a few years before. We work super well together and have been able to really figure out how to get the most out of UX Research and Data Science, and to teach that to the new team members we’ve hired (we’re now a team of 8!).
It's also not like we're hiring researchers to do one specific task. The product department has questions and decisions to make, and we help them answer those questions using the tools and skills at our disposal. It’s never “oh, that’s a data science question, you have to go over there for that.” I really feel like I've gained a superpower working with data science.
One of the things that we stressed from the beginning was to not become a bottleneck. We are a centralized team, but we also try to embed ourselves in certain projects. We don't want people to stall on decisions just because they don't have research. And so, from the beginning, we’ve taken the approach of being an enablement team. What we're good at is solving problems, but also thinking through how to solve problems and what kinds of questions and decisions are better made with data.
What we try to do is empower our product managers and designers and even engineers to go find the insights they need themselves. Now that the ResearchOps community is growing, one of our team members, Josh Kunz, a UX researcher, has gotten involved in it. We’ve realized that a lot of what we’ve been doing all along has been ResearchOps, and it’s nice to have a name for it and for what it specifically means for us at UserTesting.
Sofia: When it comes to optimizing your research operations, what are you focusing on?
Marieke: Quickly gaining access to the right people for research was by far the biggest barrier to doing more research. Because we're a B2B SaaS product, we have to be sensitive about recruiting our own customers for research.
But because of the type of company we are, bringing Human Insights to every business decision, our leadership understands the importance of user-centered design, and talking to customers throughout our product development process is a huge part of that.
Our PMs and Designers also really want to talk to customers—they aren't afraid to get on a call, but it used to be such a hassle to have to identify the customers we wanted to talk to, then go ask the Customer Success Manager for permission. That’s also very distracting for the CSMs.
We spent a lot of time working with Sales and Customer Success leadership to come to an agreement on how and when Product & Engineering talks to customers. A big part of this was putting a system in place to notify Customer Success Managers in Gainsight (the program they use) if one of their customers participated in a research study, on what topic, and with whom.
We also launched something this year we're calling our Customer Research Program. We built this so customers who are interested in participating in research can sign up. They help us build better products and get a sneak peek into what's coming. And, especially for our research customers, it's fun to be on the other side of a usability study or interview.
People sign up and they tell us a little bit about themselves, and we also send out a survey. We ask them questions about what productivity tools they use, prototyping tools, video conferencing tools, and research tools. Then we pull all of that into our data warehouse.
This has brought our customers way closer to our Product org. Any PM or designer can go in and look up customers. We have also given them access to a view where they can filter for certain characteristics. For example, we just launched an integration with Adobe XD. The PM working on that integration could find current customers who’ve used our product in, say, the last 30 days and have told us they use Adobe XD.
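That kind of lookup is easy to picture in code. Below is a minimal sketch of the filter a PM might run against the warehouse data; the record fields (`tools`, `last_active`, `ok_to_contact`) and customer names are illustrative assumptions, not UserTesting's actual schema.

```python
from datetime import date, timedelta

# Hypothetical records as they might look after survey responses and
# usage data land in a data warehouse. All names/fields are made up.
customers = [
    {"name": "Acme Co",  "tools": {"Adobe XD", "Zoom"}, "last_active": date(2019, 12, 1), "ok_to_contact": True},
    {"name": "Globex",   "tools": {"Figma"},            "last_active": date(2019, 12, 5), "ok_to_contact": True},
    {"name": "Initech",  "tools": {"Adobe XD"},         "last_active": date(2019, 9, 1),  "ok_to_contact": True},
    {"name": "Umbrella", "tools": {"Adobe XD"},         "last_active": date(2019, 12, 3), "ok_to_contact": False},
]

def find_candidates(records, tool, since):
    """Customers who use `tool`, were active on/after `since`,
    and whom Customer Success has cleared for direct contact."""
    return [
        r["name"]
        for r in records
        if tool in r["tools"]
        and r["last_active"] >= since
        and r["ok_to_contact"]
    ]

since = date(2019, 12, 10) - timedelta(days=30)
print(find_candidates(customers, "Adobe XD", since))  # → ['Acme Co']
```

The key design point from the interview is the last condition: the contact-permission flag agreed with Customer Success is baked into the query itself, so PMs never have to ask case by case.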
I’d love to spend more time in the new year sharing with the community how we’ve built this program.
The other area of improving research operations is building out a self-service “insights” model. Our Product Insights team can’t answer every question, nor do I believe we should. Some of our product teams (and internal departments) are on their own. How can we empower them to be at least somewhat self-sufficient? This is still very much a work in progress. We’ll create dashboards for the metrics a team needs, and we’ll help on projects: how to recruit people, which methods to use, how to write a protocol. And then if you want access to data, we use Chartio for usage data, AirTable for tracking research studies, and NomNom for customer feedback. NomNom has been one way our Product Managers find customers they are interested in talking to. They’ll search for a topic and find people who’ve submitted feedback or a support request. And because of our agreement with Customer Success, they can quickly look up whether or not it’s ok for them to directly contact a customer.
It's kind of like a checklist—a PM who wants to do customer interviews can refer to a doc we set up and find the resources they need, kind of like, "Okay, this is my goal. This is the type of research I want to do. Here are the kinds of customers I want to talk to."
Sofia: In previous conversations you mentioned you were skeptical about research repositories; I would love to hear your thoughts on that.
Marieke: I think tracking research work that has been done is really valuable. Knowing what the focus was for a study is really helpful to be able to refer back to. This is also important if you’re trying to measure your learning velocity, which is something we’ve just started thinking about.
But actually taking the time to store our learnings and findings when there's a shelf life for them is tough. Insights about prototypes are so specific and unique to the prototype or to our design, which makes it hard to accumulate actual learning over time. There are learnings about our customers that have a longer shelf life, and I'm more interested in and less skeptical of the value of tracking those. But even in those cases, what we’ve found is that whenever we refer back to past research, the specific question we have is ever so slightly different, so the findings in our report aren’t quite the answer.
What has been helpful is to know which recordings to go back to! Especially since in every customer interaction we touch on so much, sometimes topics we don’t know are relevant for another few months. What I would love is to tag topics that were addressed in a session and be able to search for that. For example, we’re doing a lot of work on how to help our users extract insights from the videos. I would love to have an easy way to find every instance in past research where someone shared their analysis process. Why I’m skeptical about research repositories is that if the focus of that past research wasn’t analysis those topics wouldn’t have made it into the repository.
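The wished-for capability above is essentially an inverted index from topic tags back to recordings. Here is a small sketch of that idea, assuming sessions could be tagged with every topic they touched (not just the study's focus); the session IDs, study names, and topic tags are all hypothetical.

```python
from collections import defaultdict

# Hypothetical session log: each recording tagged with the topics that
# came up, even ones outside the study's original focus.
sessions = [
    {"id": "s1", "study": "Dashboard usability",   "topics": {"navigation", "analysis process"}},
    {"id": "s2", "study": "Onboarding interviews", "topics": {"pricing", "team workflows"}},
    {"id": "s3", "study": "Prototype test",        "topics": {"analysis process", "exporting clips"}},
]

def build_topic_index(session_log):
    """Invert the log so any tagged topic maps back to its recordings."""
    index = defaultdict(list)
    for session in session_log:
        for topic in session["topics"]:
            index[topic].append(session["id"])
    return index

index = build_topic_index(sessions)
# Every recording where someone discussed their analysis process,
# regardless of what the study was originally about:
print(sorted(index["analysis process"]))  # → ['s1', 's3']
```

This is exactly where a findings-only repository falls short: session "s1" was a navigation study, so its analysis-process moments would never make it into a report, but a topic index over the raw recordings still surfaces it.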
Sofia: If you could go back in time to when you just started at UserTesting, what do you wish you knew then that you know now, or what would you have done differently?
Marieke: That's super hard. I try not to have regret over past decisions and instead focus on what we can do in the future. I feel like I'm still learning and still figuring it out as we go, especially since there aren’t many teams doing what we do to model after.
There’s one thing I wish we had done more of early on: openly sharing what we're learning with the company. We've since gotten way better about sharing and being transparent. At some point there was a feeling that we had all the data but no one else at the company knew what was going on. And I really don't want that. I want the whole company to feel like they could come to us; I want to build that company knowledge. And I think we should have done that sooner and taken the company along.
We have found this to be more effective on a team-by-team basis than sharing broadly with the company. For example, everyone can see all the research we’ve done on the intranet, but people rarely go looking for this information. So instead of big company brown bags, we do smaller sessions, like, "Hey, why don't we come to your team and chat about what we do, or what we learned, or this idea." We've done a lot of what we call roadshows, where we go and talk about the work and what we learned, and then open it up to questions.