How to do persona mapping with 50+ users
As a UX researcher at a historically data-driven company, I’ve come to appreciate the importance of quantitative data and the confidence it provides. We are always asking ourselves: What’s the interaction rate? The click-through rate? What’s the revenue comparison? How many impressions? These are important questions for measuring success.
However, nothing is quite as satisfying as being in a weekly planning meeting, reviewing the performance of an A/B test and seeing flabbergasted looks around the room. The Product Manager might exclaim: “What? That doesn’t make any sense!”, to which we (the UX team) might say: “Well, actually, in our recent usability tests, we learned that people don’t necessarily understand the functionality of [X]”. The word “interesting…” is quietly spoken, as we pause for a moment of reflection. That’s just it — even though we wouldn’t survive without quantitative data, qualitative insights are an incredibly powerful resource for understanding why a feature performs a certain way.
I feel fortunate to be part of the driving force behind user-driven design at Intent Media. I have the pleasure of bringing attention to user needs, shedding light on unusable features, and encouraging my colleagues to avoid self-referential design. In doing so, as any UX professional knows, personas help remind your team of who your primary users are, what their needs are, and how they behave.
So, naturally, I was tasked with creating our UX personas. After writing up a research plan and hypotheses, fine-tuning my screener, and finding my target audience, I began recruiting and interviewing users. After 23 sessions, I paused and dedicated a couple of weeks to analyzing the user data. Initially, I followed the steps outlined in this blogpost. I brought the UX team together in a conference room for three hours, and we started mapping post-it notes on the whiteboard. We created a dimension for each key characteristic and behavior that was relevant to our research:
As we made our way through the three-hour process, we realized that this was going to be more difficult than we initially thought. With 23 users and 16 dimensions, we had a tough time observing trends on the physical whiteboard. (My brain felt like it was melting from trying to process all the colors and letters!) What might have been an obvious cluster at the start was no longer obvious at the end. So, we took a step back and figured there must be an easier way to do this.
We decided to digitize the data and give each dimension a range of scores from 1–5, where the far left = 1, the middle = 3, and the far right = 5. So, for example, for our first dimension, frequency of travel, a score of 1 = travels on average once / month and a score of 5 = travels on average once / year.
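To make that concrete, here is a hypothetical sketch of what the digitized data might look like: each user becomes a small record of 1–5 scores, one per dimension. The user IDs and dimension names below are illustrative, not our actual research data.

```python
# Hypothetical digitized data: one row per interviewed user, one 1-5 score
# per dimension (user IDs and dimension names are purely illustrative).
users = {
    "user_01": {"travel_frequency": 1, "price_sensitivity": 4, "brand_loyalty": 3},
    "user_02": {"travel_frequency": 1, "price_sensitivity": 5, "brand_loyalty": 3},
    "user_03": {"travel_frequency": 5, "price_sensitivity": 2, "brand_loyalty": 2},
}
```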
We thought: “Well, perhaps an engineer can help us build an algorithm to do the observation for us?”. Now, you’re probably thinking “engineers don’t have time for that” to which I’ll agree. We approached a couple of engineers on our team, and while their insights were thoughtful, their time was limited.
So, Tony, our UX manager, took it upon himself to hack together a Python algorithm that returns a group of n users who share the same exact score across m dimensions. The script loops through each user and cross-checks their dimension scores with other users. For example, if three users score a 3 in dimension_x, a 4 in dimension_y, and a 5 in dimension_z, then they would be a group of three who share the same exact score across three dimensions.
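Tony’s script isn’t reproduced here, but a minimal Python sketch of the same idea might look like this. The function names and the toy scores are mine, purely for illustration:

```python
from itertools import combinations

def shared_dimensions(scores_a, scores_b):
    """Dimensions on which two users have the exact same score."""
    return {dim for dim in scores_a if scores_a[dim] == scores_b[dim]}

def find_groups(users, group_size, min_shared):
    """Every combination of `group_size` users who share the exact same
    score on at least `min_shared` dimensions."""
    groups = []
    for combo in combinations(users, group_size):
        first = users[combo[0]]
        shared = set(first)  # start with all dimensions, then narrow down
        for name in combo[1:]:
            shared &= shared_dimensions(first, users[name])
        if len(shared) >= min_shared:
            groups.append((combo, shared))
    return groups

# Toy data: users a, b, and c all score 3/4/5 on x/y/z, so they form a
# group of three sharing the same exact score across three dimensions.
users = {
    "a": {"x": 3, "y": 4, "z": 5},
    "b": {"x": 3, "y": 4, "z": 5},
    "c": {"x": 3, "y": 4, "z": 5},
    "d": {"x": 1, "y": 4, "z": 2},
}
groups = find_groups(users, group_size=3, min_shared=3)
```

With the toy data above, only users a, b, and c form a group, since user d matches them on just one dimension.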
We figured that if the script could spit out groups of 3, 4 or 5 users that shared the highest number of dimensions, then we would have a great starting point. (Check out https://jsfiddle.net/ for an easy way to run scripts and share them using your browser, or http://jupyter.org/ for Python.)
Sure enough, it worked! The algorithm returned three groups of four users who shared the same exact score across four dimensions and five groups of three who shared the same exact score across five dimensions.
I then used Google Sheets/Excel to filter for these groups in order to qualitatively understand what these scores meant. This also allowed me to manually scan for similar dimensions.
I defined a similar dimension as one where every deviating score sits exactly one point above, or exactly one point below, the majority score (one direction only), and where the users holding the majority or adjacent score make up 80% or more of the group. In Table 3 above, the similar dimension is dimension_z, where the score of 1 is exactly one point below the majority score of 2, and those users make up 100% of the group.
The explicit definition helped me look for similar dimensions that were meaningful. It also allowed me to count the number of meaningfully different dimensions and to evaluate them. I defined a different dimension as one where fewer than 80% of the users in the group share the same or similar scores.
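As a sketch, the two definitions can be expressed as a small classifier. This is my paraphrase of the rule, not code we actually ran; the function name and the 0.8 threshold parameter are illustrative:

```python
from collections import Counter

def classify_dimension(scores, threshold=0.8):
    """Label a dimension's scores within a group as 'same', 'similar',
    or 'different' (a sketch of the definitions above, not production code)."""
    counts = Counter(scores)
    majority, majority_count = counts.most_common(1)[0]
    n = len(scores)
    if majority_count == n:
        return "same"
    # Users sitting exactly one point above / below the majority score.
    above = sum(c for s, c in counts.items() if s == majority + 1)
    below = sum(c for s, c in counts.items() if s == majority - 1)
    # 'Similar' requires deviation in one direction only, with the majority
    # plus its one-sided neighbours covering at least 80% of the group.
    if (above == 0) != (below == 0):
        if (majority_count + above + below) / n >= threshold:
            return "similar"
    return "different"
```

For example, scores of [2, 2, 2, 1, 1] come back "similar" (the deviators sit exactly one point below the majority of 2), while [1, 2, 3, 4, 5] comes back "different".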
I ended up with a group of six users, which informed my first persona. Once I established who those six users were, I went back and read the interview notes and watched the recordings before drafting up the persona. I had to refresh my memory and re-familiarize myself with their goals, motivations, and key behaviors. Once I had done that, I created a detailed persona document (2–3 pages) and a one-pager version. Et voilà! Persona number one, complete.
What I love about this process is that we can easily rinse and repeat. We now have 58 users and 20 dimensions (through interviews, we found that we had to adjust a couple and add a few). Imagine a whiteboard with 1,160 post-it notes… At this point, we have had to adjust the algorithm to account for similar dimensions as well because we were seeing at least 70 groups of five users who shared the same exact score across five dimensions. We are also looking into having the algorithm return the dimension name and its score, eliminating yet another step in the process.
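One hypothetical way to fold similar dimensions into the grouping step is to count dimensions where every score falls within one point of a reference user's score, rather than requiring exact matches. This sketch is not our actual adjusted algorithm (and it skips the strict one-direction rule from the definition above):

```python
def near_shared_dimensions(users, names, tolerance=1):
    """Dimensions where every user in `names` scores within `tolerance`
    of the first user's score. A loose sketch of relaxing exact-match
    grouping to also catch similar dimensions."""
    anchor = users[names[0]]
    return {
        dim for dim in anchor
        if all(abs(users[n][dim] - anchor[dim]) <= tolerance for n in names)
    }

# Toy data: x is near-shared (all scores within one point of the anchor's 3);
# y is not (its scores spread across more than two points).
users = {"a": {"x": 3, "y": 1}, "b": {"x": 4, "y": 3}, "c": {"x": 2, "y": 5}}
near = near_shared_dimensions(users, ["a", "b", "c"])
```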
As we re-run the user grouping algorithm, we’re seeing new groups emerge that are different from the first and second personas we already created. To this, our answer is: let’s iterate!
Some other benefits of quantifying the qualitative:
- Quickly scan through a filtered spreadsheet as opposed to 100+ pages of notes
- Easily observe your data distribution to determine any gaps in your demographics
- Win brownie points with the quantitative data enthusiasts at your company
Some unavoidable cons:
- It’s not a perfect science
- The data is hand-labeled (supervised) and subjective
But of course, as a data science company, it doesn’t end there. Once we have enough data, i.e. more than 100 or 200 users, we will work with our data science team to perform a K-means cluster analysis. Stay tuned!
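For illustration only, here is a minimal from-scratch sketch of the k-means idea applied to 1–5 dimension scores. The real analysis will be done with our data science team, who would likely reach for a library implementation (such as scikit-learn's KMeans) rather than this toy version:

```python
from statistics import mean

def kmeans(points, k, iters=10):
    """Minimal k-means sketch: seed centroids with the first k points,
    then alternate nearest-centroid assignment and centroid updates."""
    centroids = [list(p) for p in points[:k]]

    def dist2(p, c):
        # Squared Euclidean distance between a score vector and a centroid.
        return sum((a - b) ** 2 for a, b in zip(p, c))

    for _ in range(iters):
        # Assign each user's score vector to its nearest centroid.
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [mean(col) for col in zip(*members)]
    return labels, centroids

# Toy 1-5 dimension scores with two obvious clusters:
scores = [[1, 1, 2], [5, 5, 4], [1, 2, 1], [2, 1, 1], [4, 5, 5], [5, 4, 5]]
labels, centers = kmeans(scores, k=2)
```

With the toy data, users 0, 2, and 3 land in one cluster and users 1, 4, and 5 in the other, which is the kind of grouping we hope will surface persona candidates at scale.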