Jared Spool Hello, I'm teen heartthrob and internet sensation, Jared Spool. And today we're going to talk about measuring user experience. Now, before we get started, I want to send my heartfelt thanks out to the entire team at the IA Conference. This year, the chairs, the volunteers, and the event management team, Kunverj, did an amazing job putting on a conference under incredible world events, and it is fantastic. If you get a chance, it would be awesome if you could just pop a note to the folks at the conference and let them know how much you're enjoying this event. It would mean a lot to them. And frankly, it would mean a lot to me. So, let's get started. Let's talk about a Design Leader's Secret Guide to Measuring UX. Jared Spool Our story begins a few months ago when I got a call from a UX director at a large regional bank. He had just come out of a meeting in which the executive team had heard the recommendations of the consulting firm McKinsey on how they should make the bank better. Turns out, they had paid McKinsey, as he put it, an ungodly sum of money to decide what the bank's future direction should be. And there he was, sitting in the meeting, listening to their recommendations. Turns out, McKinsey stayed true to form and had three recommendations. The first one was that the bank should increase profitability. The second was that they should increase their market share and go from a regional bank to a national bank. And the third was that the bank should now deliver the best-designed products and services. That was it. That was what they paid McKinsey an ungodly sum of money to tell them. It's amazing how that can happen. And the reason that he called me was because of this third item. He did not expect McKinsey to say that they needed to deliver the best-designed products and services. After all, what is a consulting firm like McKinsey doing talking about the design of products? They typically stay on the financial side. How did we get here? 
Jared Spool Well, I'll tell you how we got here. It's actually because of this person. This is Ginni Rometty. And she was the CEO of IBM when IBM decided that it was going to acquire an agency in Austin, Texas, and brand that design agency as IBM Design. That was run by this guy, Phil Gilbert. Now, the early success of this design agency was amazing. So IBM went out and invested 100 million dollars to expand their design business, bringing it to both their internal products and Global Services. And as soon as IBM put design as part of Global Services, well, all of a sudden, all these other big consulting companies started putting design as part of their global services, and that included McKinsey & Company. And then McKinsey & Company decided that they now needed to have a design effort. Well, they could train their internal people to do design, but no. Instead, what they decided to do was to buy three different design firms. So they went out and they bought three different design firms. And now they have McKinsey Design, which is a group that brings design thinking and design strategy to large organizations. So that means that every single time McKinsey makes recommendations, they have to recommend that you deliver the best-designed products and services. And it was in that moment, that one moment when the UX director heard what McKinsey said, that his life completely changed. Before that moment, his biggest concern was about how does he convince his executives to value the user experience of their products and services? And suddenly, because McKinsey made this recommendation, it shifted. It was no longer about convincing. It was about showing. How does he show his executives how they've improved the user experience of their products and services? Jared Spool So it's this showing part that had him all freaked out. And that's why he called me. Now, as soon as he got out of the meeting, he started to put together a list of things that he could report. 
Key results that he would talk about. And that's what he wanted to talk to me about: how would he sort of collect these up, and which ones would be best? And he had a list. He started sharing it with me. The first one was that maybe what they could do is increase their Net Promoter Score. Now I wanna take a moment here and digress and talk about what Net Promoter Score is. Net Promoter Score, if you've never heard of it, is a method of measuring how loyal your customers are by asking them a single question: How likely are you to recommend our company to a friend or colleague? And if you've never seen it, and I'm sure that's not true, because you've probably answered this question a million times, it goes on a scale from zero to 10. Now, what you may not know is that that zero to 10 is broken up into three different divisions. There's a division, which is the nines and 10s, which is called promoters. There's a division, which is the sevens and eights, which is called passives. And there's the division that's six or less. That's called detractors. Now, the thing about NPS is that it isn't just an average of these scores. It's got this wacky formula. The way you calculate NPS is you actually take the percentage of people who respond as promoters, and from that you subtract the percentage of people who respond as detractors. And that's supposed to give you your NPS score. But here's the thing, the numbers people put down? They don't actually mean anything. I mean, how likely are you to recommend us to a friend or colleague? People have lots of reasons, and they often don't have anything to do with the product or service. So it's actually not that helpful as a scoring mechanism, because the numbers that people put down, well, they're arbitrary. And the divisions, nines or 10s are promoters and six or less are detractors? Well, those are arbitrary, too. 
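The arithmetic he's walking through is simple enough to sketch in a few lines of Python. The score cutoffs are the ones he describes; the sample responses are invented:

```python
def nps(scores):
    """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Five invented responses: two promoters, one passive, two detractors.
print(nps([10, 9, 7, 6, 0]))  # -> 0.0
```

Notice that the passives drop out of the formula entirely, which is part of why the score behaves so oddly.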
So basically, the true NPS formula is that you take the percentage of an arbitrary number, and you subtract it from the percentage of an arbitrary number. And what you end up with is an arbitrary number. And that's what NPS is: it's based on the calculus of arbitrary numbers. Jared Spool Now, we can be just as good at measuring how loyal somebody is with a method that we actually all discovered back in about fifth grade, when we would pass a note to somebody, and it had a question on it, and it said, Do you like me? And two options: Yes or no. That's as effective a measurement as NPS. But anyways, I digress. Jared Spool So, he wanted to increase Net Promoter Score, okay. He also thought, well, maybe he could decrease how much effort it takes to use the products and services that they have. Okay. He could increase the number of functions that people use, you know, get them to use more than one. Turns out, they had done some research, and hardly anybody used any of the app functionality. Maybe they could get people to start using more app functionality. So sure, let's look there. Maybe we could see how many users use the digital apps that his team was responsible for. That's a possibility. We could increase the number of users that do just one thing, because it turns out a lot of people download the app and never even use it. So, okay, that could be useful. And finally, we could get the number of active users of their app to 500,000 active users. That's great. All of these things are fine. They're things that you can measure, and they're things that are good. But the question is, is that going to deliver better-designed products or services? That's the real question that we're trying to answer. And what the UX director did is he did what we all do when we're faced with these situations. We sort of go through our menu of things that we can measure. And he sat down and said, Okay, well, these are the things I can measure. These are things I know how to measure, I have tools to measure. 
And he started saying, Okay, let's work on these. Let's start with these. Jared Spool The problem is that just because you have things you can measure, it doesn't make them good metrics. Most of the time, they're the wrong metrics. This is the wrong way to choose our metrics. The right way to choose our metrics is to find the metrics that actually determine that we've delivered a great user experience. So what are those metrics? Well, to talk about that, I need to digress again for just another moment. I want to talk about running. I have a friend who's training for a marathon, and he is just all in on this, and he really wants to run the marathon. And in order to run the marathon, his goal is basically to deliver the best race that he can. Right? He just wants to have this race. Now, he's going to be training for this, so how does he show that he's actually running at his best? Right? That's the question. Well, what would he measure about running? He could measure how fast he's going. He could measure how many steps he's taking. He could measure the calories he's burning. He could measure his body temperature while he runs. He could measure his heart rate. You know, all of these things are things he can easily measure. And so sure, we can measure all the things, but those don't actually tell him if he's running a good marathon. They are not telling him whether he has made the right choices. Again, in order to understand this, we need to sort of look at what's going on. So let's take this apart. Jared Spool If we go back to what the team was proposing, they had collected all of these things not because they're the right measures, but because they're easy measures. We shouldn't be choosing our measures because they're easy. We should be choosing them because they show us how we're actually doing. And in order to do that, we have to understand the user's experience. Fortunately, there are some measures we can use. 
Jared Spool There's something called UX success metrics. There's progress metrics. There's what we call problem value metrics. And there's value discovery metrics. So let's take each of these apart. When I go back to my friend who's thinking about running marathons, right, let's first talk about his motivation. What is it that is making him want to run marathons? Well, it turns out that you can google Why does someone run, and sure enough, on a website called Verywell Fit, they have an article about 11 reasons to start running. And they sound right to me. They've got all sorts of good reasons, including: it improves your health, you can lose weight, you meet new people, you run for a cause, it's good for your memory. You can train for a specific goal. You can improve your energy levels, it helps you feel good about yourself. It's versatile and inexpensive, which isn't a reason to run. It's just, you know, it's like, oh, I need to not spend money, so I'm gonna go run. You can be part of a community. Or it helps with stress. I don't actually think it's a complete list, because there's a glaring one for me that's missing, which is basically: you're being chased by some bad guy. But okay, these are the reasons that people would want to go running. So if we look at the reasons that people want to go running, we have another name for this. We can call these outcomes. And lately, there's been a lot of discussion about outcomes. Josh Seiden wrote this fantastic book called Outcomes Over Output. And it's really about how we shouldn't be focusing on our outputs. We should be focusing on the change we make in the world. What are the outcomes that we want to have, whether they're for running or for measuring the applications and products and services at a bank? The bank's initial choices of metrics? Those were what I would call UX outputs. 
They measured output: how many times people clicked on things, how many times they used things, how many functions they used, right? These were the outputs. You can measure outputs, but they don't say whether the person's having a great banking experience or not. What the bank needs to measure is UX outcomes. Now, we can divide outcomes into two pieces. On the one side, we have what we call business outcomes. Jared Spool Business outcomes are results that benefit our organization. They're things that help us make money, that keep our employees happy, that help us reduce costs. These are business outcomes. Now if we look at the recommendations that McKinsey made, we can translate them into business outcomes. When they said increase profitability, we can set a goal: we want to increase profitability by 15%. That's a business outcome. Being more profitable doesn't necessarily help the customer, except insofar as it helps the bank stay around longer, but the customer doesn't really see those effects. They also said they want to increase market share and have national reach. Again, these are both measurable things. We can measure the outcome from these, right? We can say, okay, market share goes to 20%. National reach means we're now advertising in cities beyond our region. But what is the goal for delivering the best-designed products and services? How do you even measure that? Well, that's where user experience outcomes come in. User experience outcomes are results that benefit our users. And there's a way I like to think about it. It's answering a question. The question is: if we deliver a fantastic design, how will we improve someone's life? Right? So if the bank delivers a fantastic design, how will they improve the life of their customers and the people who use their products and services? That's the question we want to answer. Fortunately, the bank had done a bunch of research about this. 
One of the things they'd found was that most of their, what they call, high worth customers, the ones who are most valuable to the organization, most of those customers are, in fact, older. And some of that's just the nature of investing, because after all, in order to be a high worth person, you have to have saved up money, and money takes time. So young people, unless they inherit the wealth, tend not to be high value customers. So they started to look at, well, how can we get more high value customers? Which means, how can we get younger people to become high value customers? And so, in looking at that, they started to study what it's like to be a young person and manage your money. And they found out that there's a lot of fear involved. These folks are afraid of managing their money, of working that way. So that was one key thing. Part of that fear is that if they make a mistake, that could cost them later, right? They pay too much in interest, or they sign up for a plan that has bad fees. All of this is going to cost them more. And they don't feel like they're in control of their own banking. They feel like they do things because that's how they were taught, but they don't feel like the money is coming in and staying where it needs to be and going where it needs to be. When they sign up for what the bank calls a product, that's your mortgages, your car loans, your student loans, your credit cards, your lines of credit. These things, the young people actually find the experiences to be completely different. They have to tell the bank things it already knows. And it's filled with jargon. And one of the things that they heard over and over again was that money makes me feel dumb, right? That people just don't feel smart. Nobody likes to feel stupid, and the bank makes them feel stupid on a regular basis. So this research is very helpful, because we can take from this what the outcome needs to be. 
Based on this research, the UX director and his team came up with an outcome, and the outcome is having a feeling of financial control and stability. If they can get their customers to feel, quite naturally, in control, and that they have financial stability, this would be huge. Jared Spool Because that's going to lead them to pay more attention to the bank and actually get benefit from it, and no banks do this. So this would give them a competitive advantage. Now, here's the thing I want to point out. In order to identify compelling outcomes, the bank had to invest in really solid qualitative user research. If they hadn't done this, they wouldn't know what outcomes to pick. We have to invest in extensive qualitative user research to get to the measures we need. So here's the thing about this outcome: the outcome by itself doesn't tell us how we're going to measure it. It just tells us where we're trying to get to. In essence, it tells us what the finish line of the race is. To understand what we measure, we have to first turn to what we call a UX success metric. And a UX success metric is how we're going to know when we've achieved our UX outcome. Again, how do we know that we have crossed the finish line of our race? Now, people who race marathons, they have goals. How does that runner know that they've achieved their goal? They've crossed that finish line. That's what we call a success metric. A success metric is that moment when they cross the line. Success metrics are observations of physical evidence, right? We can see the runner cross the line. That tells us that we have succeeded. Jared Spool How they cross the line is really a matter of what we call precision. And precision is basically the quality of exactness. Let me digress on this for a moment here, because it's important to understand the difference between precision and accuracy. 
I can shoot at a target, take four shots at that target, and if I get those shots dead center, then yeah, I was accurate. I was also very precise. Getting all those shots right in the center, that's precise shooting. Now, I can get four shots very close together, but not in the center. That means I'm precise, but I'm not accurate. And I can get those four shots to be not near the center, but surrounding the center. That makes me accurate, but not precise. And I can get those four shots to just hit the target in random places. I'm neither precise nor accurate. So we have to understand the difference between accuracy and precision. And when we're talking about running a marathon, we can look at the accuracy, which is crossing the finish line, with different levels of precision. Now, if our goal for running is to set a new world record, well, we're going to need precision that gets us down to the thousandths of a second. If we're just trying to qualify for a major marathon, well, our precision there just needs to be in hours and minutes. Maybe, you know, some amount of seconds. If I'm setting a personal record, I just need to know it to like the nearest quarter hour: hey, I got it done in four and a quarter hours. And if I'm running my first marathon, I don't need a lot of precision. It's whether I actually succeeded or not, right? We can measure this with a very simple instrument: Did you finish your marathon? Yes or no. We add precision when we can't detect the change we wish to observe. That's when precision is most important. If we can detect the change, just crossing the finish line, we don't need a lot of precision. So let's go back to the reasons that people start running. My friend has a very specific reason: he's training for a marathon. That's a specific goal that he's looking for. But other people might go running because they need to improve their energy level, or maybe they want to be part of a community, or they'd like to lose weight. 
Every single one of these reasons would actually have a different outcome, which means every single one of these reasons would have a different UX success metric. If we look at the bank, the bank's UX outcome is having a feeling of financial control and stability. So the success metric, the metric that tells us that the bank has made it across the finish line, that the UX team has made it across that finish line, is the evidence that shows customers have a feeling of financial control and stability. That's what we're trying to do. Jared Spool So our UX success metric is evidence showing customers have a feeling of financial control and stability. Now, remember, that's accuracy. The precision can be really rough. We could just ask people: Have you recently felt in control of your finances? And if they say yes versus no, that might be all the precision we need in order to tell whether we've achieved the outcome. Someone who says yes who used to say no, that's a good thing. Now, figuring all this out, this is really hard. Right? Measuring the right thing, not measuring what's convenient, but measuring the right thing. That's hard, because we have to come up with whole new measures. Now, it's really funny. Businesses work so hard to be unique. They have a unique selling proposition. They have a unique mission or unique vision. They have a competitive set of products that's different from everybody else's. They're always looking for their differentiators. And then, when they go to measure things, they want to use the same measures everybody else does. Why? Because they want to be the same, or because it's easy to use the measures that other people do. The real fact is, if we're going to do a good job of this, it's not gonna be easy. It's gonna be hard. That means that if we're going to develop UX success metrics, the organization really needs to invest in extensive qualitative user research, right? This is going to be required to come up with the metric. 
And then, to be able to measure it, we need the research capability to do that. Now, after we have our success metric, we can move on to something we call progress metrics. And progress metrics are how the organization measures that it's getting closer to the UX outcome. Now, at the Boston Marathon, there's this thing called Heartbreak Hill. And Heartbreak Hill is the thing you have to get over if you're going to make it to the finish line. It comes at like mile 20, mile 18. If you don't make it past Heartbreak Hill, you are not going to make it the full distance of the marathon. So this is the key milestone, right? This is an absolute milestone. And we can measure whether people are making it through that milestone, but that's during the race. If we're training, what are the milestones we might use for that? Well, it turns out that I googled again, and crazycompression.com has a set of milestones, right? You have to finish your first 5K. Then you get your first blister. Apparently you then have to have your first streak of running consecutive days. Joining your first running group. Buying your own compression socks, because that's what crazycompression.com is all about. Running your first 10K. Achieving a negative split, which means your time is faster in the second half of your race than in the first half. Completing your first trail run. Running 30 miles in a week. Running your first sub-seven-minute mile. And completing your first half marathon. These are milestones they've created. Now, I'm not a runner, you may have guessed that, so I don't know if these are real milestones. But we could definitely use these as progress, right? How many of the runners that we're trying to take through this process have made it through each of these steps? And we can look at their progress. This is, in essence, a user journey, right? 
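That milestone list can be turned into a simple progress funnel. Here's a minimal sketch; the milestone names echo the list above, and every count is invented:

```python
# Hypothetical progress metric: how many people in a cohort have reached
# each milestone on the journey. All counts are invented for illustration.
milestones = [
    ("finished first 5K", 900),
    ("first running streak", 640),
    ("joined a running group", 410),
    ("ran first 10K", 350),
    ("completed first half marathon", 120),
]
total = 1000  # people who started the journey

funnel = {name: reached / total for name, reached in milestones}
for name, share in funnel.items():
    print(f"{name}: {share:.0%}")
```

Watching how these shares move over time, rather than the absolute counts, is what makes this a progress metric.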
If you've ever heard people talk about user journeys, that's what we're doing: we're measuring the progress of a user on a journey to getting to the outcome. And when we measure those things, what we're looking at is the skill sets they need, the tool sets they need, and the changes in their mindset that they need. That's basically it. And these are the things we need to be able to identify the critical user milestones. What skills have they mastered? What tools do they know how to use? What mindsets do they have? When people are using the products or services, that's basically tool stuff. But what did they have to overcome to use those things? Did they have to learn skills? Did they then have to go forward and change their mindset? Right? People are afraid of finance. Well, if we give them tools, will that make them less afraid? Now they're going to need to understand how to use those tools. So they're going to have to have the skills to use the tools we give them. And then they're going to have to change their mindset, from this is somehow scary into something where, in fact, this is okay. And we can get back to this question: Have you felt in control of your finances? Yes or no. Now, this is a subjective question, right? It's not objective. Everybody has a different definition of what in control is. But subjective questions are okay. They help us understand. And there's a bunch of subjective questions we can use to measure user experiences. One of the ones I've been interested in lately was created by a guy named Sean Ellis. He came up with something that he calls the product market fit question. And this is an interesting question, because unlike NPS, it sort of talks to the current state of the user. It asks them a question about the future, but it's really about the present, right? It says: How would you feel if you could no longer use the product? 
If the product were to disappear right now, how would you feel? And unlike NPS, which goes on a scale of zero to 10, which is basically 11 different choices that you have to make, Sean narrows it down to just four choices. The four choices are pretty clear: you're either going to be very disappointed that you could no longer use the product, or you would be somewhat disappointed. Jared Spool Which means that the product isn't as valuable to you. You could be not disappointed at all, which means it's not valuable in any sort of way. Or maybe you no longer use the product. Maybe this is just a question that doesn't actually apply. That's it. Those are the four choices. So, what do you do with this data? You throw out all the not-applicables, and you don't really pay attention to the not disappointed. Instead, you focus on the very disappointed and the somewhat disappointed, leaving the not disappointed and not applicable in a different group. And what we want to do is really work hard to grow the very disappointed group, the folks that would be really disappointed if the product went away. Those are the folks who think the product has high value. If we can get those folks to grow and shrink the somewhat disappointed group, we have hit what Sean Ellis calls product market fit. And that means that the product itself is valuable enough that people just want to keep using it. Now, Rahul Vohra, who's the CEO of Superhuman, he has a variation of this that's really interesting. Not only does he ask what would happen if you could no longer use it, using the same four categories that Sean uses, but he also asks each person who fills this out what type of people would most benefit from the product. What's interesting is the folks who really think the product is highly valuable, the ones who would be very disappointed if it went away? They're actually describing themselves. So this is telling Rahul who the ideal user is. 
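The segmentation Sean Ellis describes can be sketched as a quick tally. The four response categories are the ones from the talk; the response counts are invented:

```python
from collections import Counter

# Invented responses to "How would you feel if you could no longer use the product?"
responses = (
    ["very disappointed"] * 45
    + ["somewhat disappointed"] * 30
    + ["not disappointed"] * 15
    + ["not applicable"] * 10
)

counts = Counter(responses)
# Per the talk: set the "not disappointed" and not-applicable groups aside,
# then watch the size of the "very disappointed" group among the rest.
engaged = counts["very disappointed"] + counts["somewhat disappointed"]
very_share = counts["very disappointed"] / engaged
print(f"{very_share:.0%} of the engaged group would be very disappointed")
```

Tracking that share over time, and working to grow it at the expense of the somewhat disappointed group, is the measurement he's describing.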
And the folks who are somewhat disappointed? They're describing the people they think would use it that are not themselves, which is also really interesting. In a similar way, he asks: what is the main benefit you receive? That question tells us what is driving that desire, and what is driving even this somewhat disappointed group, the ones who find some value but not a ton. What is driving them to use it? And finally: how can we improve? Well, that somewhat group is actually going to tell us the things that would possibly push them into the very-disappointed-if-this-product-would-disappear group. That's what this does. And the thing is, every time we get these answers, this is an observable outcome. This is just like crossing the finish line. And we can look at the rate at which people are doing this, and from there make our measures. So progress measures can be physical things, like actually using the various features, but also perceptual things, the mindsets we have that say, is this thing very valuable to us? Or is it just somewhat valuable to us? And so the bank could use those things as progress measures. Now again, to develop these things, we need a lot of research. We need to go out, we need to talk to customers, we need to find out what it is about the product that makes it seem really valuable or not valuable. We need to understand what their journey is. All of this is going to take qualitative UX research, and a sophisticated program at that. Another type of metric that the bank could use is what we call problem value metrics. These are metrics that basically ask us what obstacles are preventing our users from achieving their outcome, and then we map that to how much those obstacles are costing us. This is a way for us to tie the UX outcomes to the business outcomes. Remember, McKinsey wants the bank to become more profitable. They want the bank to increase market share. Those are business outcomes. We can connect the increased financial stability UX outcome to those things. 
And more importantly, we can look at the obstacles that prevent us from getting to those things. What is stopping people from having that stability today? And it turns out that every time there's an obstacle, it's rooted in frustration. And the frustration from that obstacle results in lost sales, and increased costs, and waste from having to do rework, of writing things multiple times till people get it, or putting features out that nobody uses. All of that stuff is a cost. All of that is waste. And if we can get a handle on that waste, we can suddenly get to a point where we are succeeding for our customers. So it turns out the bank, like all banks, has a whole bunch of user experience problems that are costing the bank, because they're frustrating. Customers are constantly having to tell the bank things the bank already knows. When they use loan and credit applications, they're always different, and they can't use knowledge from filling out one on the other, even though they've told the bank many of the things that are on those forms before. When you're given choices for which credit card product to pick, how do you know what the differences are? They have 14 different credit cards. Why would I pick one over the other? They don't understand how these different things work. What is a home equity loan? What is a line of credit? Why would I want one over the other? These are actually hard problems to solve, and how do we go about solving them? And if the customer makes mistakes? Well, the bank is intolerant, right? If you miss a payment, you get hit with a fee. If you overdraft, you lose something else. All of these things, there's a lack of tolerance. And there's no support for the users and the customers when they make these types of mistakes, which are often because they don't understand the financial instruments that they're using. So we can start to measure these. We can start to look at, well, how often do people call support? How often? 
Do people switch to other banks? How often do they stop using the products and services, so that any new things we build are a complete waste? And we can look at those things. And we can look at this not only for users, the customers of the bank, but we can actually look at employees. I mean, how often are employees struggling with the systems that we're building? And when does that frustration prevent users from feeling financial control and stability, because they're not using this stuff? How does that tie into the outcome? And from there, how much does all that frustration end up costing the bank? We can actually come up with a dollar amount. And when we can come up with a monetary amount, we can go to management. And we can say: this problem costs this many millions of dollars, and if we fix it, we save the bank that many millions of dollars, and that increases profitability. Or we can get this many more sales, and that increases market share. So we can tie the actual metrics for what the customer needs back to the metrics for what the bank needs. We make them one. And when they're one, everybody wins when we improve upon the metric. So that's what problem value metrics are. And again, in order to figure out what our problem value metrics are, we need to create an extensive qualitative UX research investment. We need to make that happen. The last type of metrics that can make things better are what we call value discovery metrics. These are metrics that allow us to actually add value to the user's experience, because we're already collecting data. You've seen this, and you may not have even realized that you were seeing it in operation, right? You go to Amazon, and based on what people like you buy, Amazon will recommend books to you as to what you should buy. Jared Spool And they do that by just looking at the data they're already collecting. They're collecting data about what people are purchasing or what people are viewing. 
And they look at what you've been purchasing or what you've been viewing, and they just compare and say: well, here are all the things you've looked at. People who've looked at the same things have also looked at these things you haven't looked at. That's how they make this work. Spotify has a similar capability. Spotify will look at the music you listen to, and how you indicate that you like it, and it will then recommend more music that you would like, based on what other people who've listened to similar music have liked. This discovery feature turns out to be really powerful. If you've ever used an app like Waze, it actually uses the other users of Waze to figure out what the best routes are. It times people traveling different distances and creates this giant Markov chain, and from that, it can figure out the most effective route for you to take. Even if the highway is backed up, maybe back roads will get you there. So it turns out the bank is collecting a ton of data from customers. They know who the customer is, where they live, where they work. They know where they spend their money. They actually know the different sources of income that they get. They know their credit score. They know how much credit they currently use. They know whether the house or the car they have has needed maintenance, and what type of maintenance it might need in the future, based on what other people with similar houses and similar maintenance issues have needed. So they could use all that information to recommend products, recommend services, connect people with other customers who might be the type of people who fix cars or maintain houses. They can do all this stuff. They can create budget summaries of spending patterns. They can find credit options that match the particular needs that somebody has, if the bank knows that you've just added a child to your family.
They can offer you different types of credit options to help deal with that particular type of thing. They could use other types of data that they collect from other customers to recommend different savings plans, to recommend different vendors, to do all sorts of things. And they can provide just-in-time assistance, to suggest that, you know, maybe right now, where you're at, consolidating your loans into a line of credit would be a better way than paying these different higher interest rates. And they can do that based on something more than just pushing out metrics and expecting the customer to figure it out. And that's the goal of all this. So those are the bank's discovery metrics. How does the bank collect data? What type of data do they collect? And can they use that data to then enhance the user experience, to actually build things in? And again, there's no way to figure this stuff out unless you are already doing extensive qualitative user research, and you figure out where the gaps are and what is needed. You have to do user research, and it has to be good user research, to get to these types of metrics. They don't just appear. So let's talk for a minute about that research. The reality is that the only way we can get to real metrics is to already have a solid qualitative user research capability built. Jared Spool Here's the truth. If we want to improve our qualitative UX research and we're just doing usability testing, we're not going to get there. Usability testing is not the type of UX research that gets the type of metrics we need. And frankly, surveys are ineffective. They don't help us either. There are really no out-of-the-box measurements that are going to help us here, right? Opening up Google Analytics and looking at what they give us in the box from the beginning is not going to help. Jared Spool We have to tailor our measurements to each UX outcome. If we change the UX outcome, we have to change the measurements.
Because the success metric changes, the progress metrics are going to change. All the metrics end up changing. As organizations mature their UX research capability, they go through stages. The place everybody starts is doing no research at all. We call that stage zero. Then they introduce a basic usability testing program; that would be stage one. In stage 1.5, they're making their test tasks a little more customer-driven, based on what's coming in from support. In stage two, they are moving to interview-based tasks, where they're asking users to tell them what they do with the product and then measuring their success in that way. In stage three, they're actually going out into the field; they're no longer bringing users into a lab, but instead visiting users in their home or their office, seeing how they work. In stage four, we're doing more focused user research, where we're looking for people who need certain things. We look for people just as they're applying for a car loan, and we walk them through the process of actually applying for a car loan. In stage five, we're doing more longitudinal studies, where we're looking at the entire lifecycle of a customer over time. And in stage six, we actually get to strategic user research, where we're using our research to drive the organization's strategy. And that's what we're doing here. Those are the parts of the process. Now, the reality is this: if you're doing anything less than stage four, it's going to be really hard to come up with realistic, substantial UX metrics. And that's because we need that type of in-depth research to drive our results. We need a human-centered approach to metrics. And that means that we have to infuse our entire organization with knowledge of the most important humans that we care about, which is our users. That's what I came to talk to you about.
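[Editor's note] The "people who've looked at the same things have also looked at these things" mechanism described earlier comes down to simple co-occurrence counting over user histories. Here is a minimal sketch in Python; the `recommend` helper and the product names are hypothetical illustrations of the idea, not Amazon's or any bank's actual system:

```python
from collections import Counter

def recommend(histories, user_items, k=3):
    """Rank items the user hasn't seen by how often they co-occur
    with the user's items in other users' histories."""
    user_set = set(user_items)
    scores = Counter()
    for history in histories:
        h = set(history)
        if h & user_set:                 # this history overlaps with ours
            for item in h - user_set:    # count only items we haven't seen
                scores[item] += 1
    return [item for item, _ in scores.most_common(k)]

# Hypothetical product histories for four bank customers
histories = [
    ["checking", "credit-card"],
    ["checking", "credit-card", "car-loan"],
    ["credit-card", "car-loan", "mortgage"],
    ["savings"],
]

print(recommend(histories, ["credit-card"]))
```

Real recommenders weight by similarity and operate at far larger scale, but the core move is the same: score unseen items by how often they co-occur with yours in the data you're already collecting.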
So if we're going to measure UX, we have to start with understanding our UX outcomes. We then pick the metric categories that are really going to meet the needs: success metrics, progress metrics, problem value metrics, and value discovery metrics. And finally, all of those metrics have to start with an investment in qualitative UX research. We need a mature program to actually get the results. That's what I came to talk to you about. If you found this to be the least bit interesting, there's more information. I've got articles about each of these different types of metrics at uie.com. You can reach me at jspool@uie.com. You can also, if you haven't, sign up and connect with me on LinkedIn. I'd love to meet you; pop me a note. Tell me what you found interesting about this presentation. And of course, you can follow me on the Twitters, where I tweet about design, measuring design, design strategy, and the amazing customer service habits of the airline industry. Thank you very much for encouraging my behavior. Transcribed by https://otter.ai