“I have no ability to show that we provided value and made it more likely for a customer to buy a product.”
CHALLENGE: HOW MIGHT WE PROVIDE TAILORED DASHBOARDS TO SPECIFIC ROLES AND IDENTIFY A STRATEGY FOR THE PRODUCT MOVING FORWARD?
To increase engagement, the company wanted to conduct user research to guide design strategy and surface design opportunities, validate past role-based research, and provide behavioral anecdotes. The research would impact not only the designs and interfaces themselves, but also serve as a communication tool for customer-facing staff in sales and setup conversations, helping them tailor the services and tools to specific users.
Client: Customer Experience Management Software Company
Team: design researcher (myself), program manager, design manager
My Deliverables:
Qualitative and generative research
Insight analysis and synthesis
Prototype collaboration
Reports (goals, needs, and behaviors)
Archetype/User Roles
Research Approach
Several years ago, the company set out to understand their user landscape and identify the types of users that would benefit from their software. Based on customer interviews and observations, the team defined 10 user roles, listed each role's day-to-day tasks, outlined the roles' relationships to one another, and identified relevant charts for each role. These initial findings were used to help customer-facing staff work with their customers to build programs for specific departments.
With the current push to increase engagement, the product team hypothesized that tailored role-based dashboards would give users role-specific insights that they are currently missing or must work hard to extract from their existing reports.
While previous research outlined the surface-level relationships and communications, the team felt it would be important to target and understand each role's goals, drivers, barriers, and needs in order to build a narrative and develop more comprehensive dashboards. To understand these, we decided to explore the following questions:
What are the ways each role makes decisions, and what tools do they use to make them? (How they do their job, the tools each user needs to do their job, how they make long-term and immediate decisions, how they set and measure metrics or goals, and the types of reports they are expected to give)
How do users interact with their own customers, staff, managers, etc.? (How they communicate and work with their managers/colleagues, and what data they look at regularly)
How do users interact with the software? (How often are they using the software? When they log in, what are their primary uses/actions? Other general observations)
What do users need that they don't already have? (How would they react to a prototype?)
Research Plan
CHOOSING METHODS: Based on the questions we wanted to answer, I designed a two-part research plan:
First, a generative, exploratory phase to begin answering our questions, test the prototype dashboards, and create role archetypes
Second, a validation phase to verify the insights, dashboards, and decks with customers
For the first round of research, I recommended a three-part interview: a deep-dive discussion of their work relationships, decisions, and activities; followed by a tour of their current dashboard or analogous tools to understand their usage behaviors; and finally a walkthrough of the current prototypes to test our dashboard assumptions.
I chose to start the interview with the discussion to capture answers to our first two major questions around decisions, tools, and work relationships. While the prior work helped us to identify these users, we needed to understand more of their behaviors, pain points, and general frame of mind.
For the second activity, I wanted to begin answering the third question around current software usage. By observing users in their current setup and asking them to describe the information they rely on, I aimed to understand their usage patterns and whether they were using the program as intended or had created workarounds.
For the final activity, I wanted to show these users our dashboard prototypes. While they were in a very early form, I knew we would be able to quickly test our assumptions and either validate or refute the direction the team was headed.
RECRUITING: Based on the recommendations of our program manager, we decided to interview three different types of participants for each of the pre-identified roles. Because the product is used by businesses of varying sizes, we wanted to recruit existing customers who fit our two different customer types. We also wanted to recruit a third interviewee for each role who was not a client, to round out the research and understand their working patterns. I worked internally with account managers to recruit customers, and with consultants and friends of the business to find non-customer interview opportunities.
Running the Study
While the first phase of this project is ongoing, we have identified several insights so far:
While the first-round prototypes were tailored to specific roles based on the previous research findings, the customers we interviewed felt the prototypes met only their basic needs and did not answer the "why" or meaningfully guide decision-making.
As we have observed in prior usability tests and discussions, a major barrier to use for these customers is the steep product learning curve.