Sarah Flamion Hello, and welcome to our discussion of how to turn messy data into meaningful insights. I'm Sarah Flamion, a research architect at Salesforce. And I'm presenting today with the very talented Kellie Carter, who's a lead researcher at Salesforce. Like many of you, Kellie and I work in an enterprise where it's our job to help make sense of complexity. Let's start with a story to illustrate this point. Here's a seemingly simple enterprise scenario that's probably familiar to most of you: you, the consumer, shop on a website, you add a product to your shopping cart, and then maybe you get distracted or decide to think longer about your purchase, and you leave the site to go do something else. This is what's known in our world as an abandoned cart. If the brand you're shopping with is on top of their marketing game, you'll likely receive an abandoned cart email which will try to entice you to complete your purchase. Behind the scenes, this seemingly simple event-to-email scenario is actually pretty complex. The brand needs to know you, the consumer. They need access to the product name, image, and description of the item that was in your cart. They ideally want to link you directly to the cart to reduce purchasing friction. And they're likely also trying to personalize your email with related product recommendations, maybe personalized offers, directions to the nearest store, product reviews, maybe even shipping details. All of these details require data from across the enterprise, managed by many different teams, and any research we conduct around abandoned carts needs to take all of these nuances and complexities into account. Which brings us to this reality: enterprises are messy. There are huge amounts of data which are often stored really inconsistently. Problem spaces are large and complex, and they're often owned by multiple functional teams. Questions are constantly evolving based on factors both within and outside of the enterprise. And boundaries between projects can be blurry; sometimes different project teams are asking similar questions, or projects might have overlaps. We're here to tell you to take a deep breath and fear not: researchers can save the day. And you can trust us. Between us, Kellie and I have several decades of experience working on big, messy, cross-cutting projects at enterprises across multiple industries. It's important to remember that synthesis of data is not just a discrete step in the research process, concentrated in the moment after data collection is complete. Really, synthesis should frame all the decisions we make throughout the entire research process.

Sarah Flamion Today's talk is going to be structured into three parts. In each, we'll be talking about the choices that were made throughout the research process to drive effective synthesis. The first part of today's talk will cover techniques for dealing with projects in new product spaces. The second will cover techniques for dealing with projects around large-scale customer events. And in the third, we'll share some lessons-learned principles that we've discovered and now live by. The techniques we'll explore can be used in other scenarios as well, but we thought it would be helpful today to give you some context as we explain them. Our goal is that you leave here excited to try these techniques yourself. And to empower you to do that, we'll be sharing the tools and strategies we used to make these techniques successful.
We also know, though, that the world right now looks really different than it did just a few months ago. And so throughout our presentation, we'll do our very best to talk about ways these techniques may be adapted for a remote research scenario. So let's get started. Our first situation is a new product space. When enterprises explore a new product space, they're often grappling with broad, high-level questions. What do our customers know about this space currently? If they're already involved in this space, what does that current reality look like? What expectations do they have for products that are in the space? What problems are they currently experiencing? What solutions might be most helpful? Often the solution ideas have cross-product impact. On the customer side, the product space might span multiple systems that they're already using. Or on the enterprise side, you may be exploring problems that span domains covered by multiple existing products. And because of this, multiple functional teams are often involved in exploring the space and envisioning solutions. And the questions themselves are evolving. Insights need to be specific enough for those functional teams to act on them, but broad enough to have lasting impact. And lastly, there may be multiple sets of relevant customers. It might require multiple types of customers to complete an end-to-end scenario in that product space, or there might be more than one type of customer engaged in the space. So where do you begin? We're going to cover four strategies for generating valuable insights in this type of scenario. The first we're going to talk about is scored concept testing. Let me set the stage for this example. I was assigned to conduct research on a project where we were exploring a very new product space, primarily focused on retailers. I gathered project stakeholders, and we brainstormed possible product concepts that we could explore in this project. We had over 80 ideas surface as a result of this brainstorming. And then I had stakeholders asking me: how do customers think about these ideas, and where should we focus? I can prioritize ideas all day long, but prioritizing among over 80 ideas is a pretty interesting challenge. I knew that my synthesis would need to provide stakeholders with really actionable guidance around the most critical concept ideas for us to explore. But I also knew that they needed a broader understanding of how customers were thinking about the product space currently, and that I didn't want to get too bogged down at this stage with product solutioning. To accomplish this type of synthesis, I first worked together with my stakeholders to categorize those concept ideas. For this project, we used categories related to a typical retail lifecycle, from inspiring customers and encouraging them to discover a brand's products, through to the buying cycle, and eventually all the way to servicing and re-engaging customers. The categories can be totally different depending on your project, but the idea is that you need to break a huge number of ideas into something more manageable. But just prioritizing by category isn't what the team needed here; they were curious about the concept ideas themselves. So my research partner and I used a technique where we worked with participants to interactively score those concept ideas by category. We did this remotely: we created a set of swim lanes, and we put each of our ideas into a text box.
We discussed each concept with customers, and then those customer participants were able to drag the box to a swim lane. The swim lanes ranged from "not of interest to our company," "interesting, but not on our roadmap," "really valuable, but not on our roadmap," "is two or more years out on our roadmap," "is a year or less out on our roadmap," to "our company already does this." As they moved each concept into a swim lane, we talked about their reasoning for their choice, and then we repeated this for each category. Now, this might seem similar to a MoSCoW technique you may have used, where you have people indicate if ideas are must-have, should-have, could-have, or won't-have. And that works as well. But I personally think this approach can add some additional granularity. It can highlight ideas customers are already in the process of implementing, meaning it's valuable enough to them that they're already devoting resources to it, or ideas that they might already have, and as such may not feel very innovative.

Sarah Flamion Once we completed this activity, we were able to assign scores to each idea by giving a numeric value to each swim lane. That data let us draw a quantitative narrative for our stakeholders. For the examples you're seeing here, we used R, and the help of a fantastic designer on our team, to share out visual indications of the ideas that had the most resonance. This helps stakeholders have a quick, at-a-glance understanding of the findings. And then we were also able to use the data to help stakeholders understand nuance. So, for example, were responses different for customers who frequently engaged with us versus less engaged customers? And we were able to compare responses across categories, so we could highlight which categories had more resonant ideas. But valuable synthesis is not just about reporting data. It's important to explain the why behind the findings. We used quotes and examples to add color to those quantitative scores. And we were also able to share some qualitative insights that surfaced in discussion. So, for example, many customers across many ideas told us that they really needed a 360-degree view of their customers across these steps. And we saw a trend of trepidation around ideas that were perceived as being too futuristic. The ideas were interesting, but customers were pretty willing to let somebody else be on the bleeding edge of trying them out. This pairing of quantitative and qualitative insights helps us provide more depth to stakeholders as they determine how to move forward.
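To make the scoring concrete, here is a minimal Python sketch of how swim-lane placements might be converted into per-idea scores. In the actual project this was done in Excel and visualized with R; the lane weights, concept names, and placements below are hypothetical.

```python
# Minimal sketch of scored concept testing: convert each participant's
# swim-lane placement into a numeric score per concept idea.
from collections import defaultdict

# Hypothetical weights: lanes closer to "already doing this" score higher.
LANE_SCORES = {
    "not of interest": 0,
    "interesting, not on roadmap": 1,
    "valuable, not on roadmap": 2,
    "on roadmap, 2+ years out": 3,
    "on roadmap, 1 year or less out": 4,
    "already doing this": 5,
}

# Each placement: (participant, concept idea, swim lane chosen).
placements = [
    ("p1", "curbside pickup", "on roadmap, 1 year or less out"),
    ("p2", "curbside pickup", "already doing this"),
    ("p1", "AR try-on", "interesting, not on roadmap"),
    ("p2", "AR try-on", "not of interest"),
]

totals = defaultdict(list)
for _, idea, lane in placements:
    totals[idea].append(LANE_SCORES[lane])

# Average score per idea gives a comparable "resonance" metric to chart.
for idea, scores in sorted(totals.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{idea}: {sum(scores) / len(scores):.1f}")
```

The same per-participant records also support the nuance cuts described above, such as averaging separately for highly engaged versus less engaged customers.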
The second strategy we're going to talk about is concept framing. In this example, the project was around a new idea: helping customers better connect their marketing, service, and commerce systems. We had a number of concepts being explored, but not nearly as many as in the past example. In this project, what we wanted to understand was at a slightly higher level: how were customers feeling about these concepts more generally, and how interested were they in Salesforce getting involved in that space? We didn't need a prioritization at an individual concept-idea level; we needed to help clarify a vision across multiple products and teams. Because I knew the type of synthesis that was expected, I crafted an activity at the concept-category level. So I again worked with stakeholders to group concepts into categories, similar to that last example. In this project, three of the categories were around things that we considered to be fundamentals, necessary to implement our big idea of connecting marketing, commerce, and service systems. And then two other categories were around that big idea itself. In this activity, rather than having customers give us input at an individual concept level, customers placed the categories themselves on a two-by-two grid. The spectrums we used in this case were how valuable customers felt it was for Salesforce to get involved in that concept space, and whether or not they had already tried to implement that idea themselves. And again, the spectrums you choose can be really different depending on your needs, but the idea is to get a comparative understanding of the different categories. And then we spent some time after the activity discussing the rationale behind participant choices. While this was done in person at a workshop with customers following a conference, this is an example of a technique that could easily be adapted to a remote setting using a tool like Miro or Jamboard. Then it was time to do synthesis. First, I shared direct findings. There were some obvious findings; the pink category here clearly appears to be something where customers saw value in Salesforce being engaged and had not yet attempted to implement the idea. I looked at results individually, and I also looked for patterns across categories, maybe two categories that had similarly spread responses. In addition to direct findings, synthesis also included a review of the discussion details, sort of the why behind these placements. This did tell us the why, but it also brought up some additional and broader insights. As I was analyzing that qualitative data, I made a note when the discussion moved away from just a direct explanation of the placement of a category, and I gave a topic tag to those divergent comments. I just did this in Excel; I wasn't using a sophisticated tagging system. But when I grouped those topic tags, I was able to pull out additional themes that were really insightful. So, for example, in this case, customers told us that our big idea was really valuable, and they definitely needed it. But the bigger need, and where they were having more challenges today, was with what we were considering sort of the foundational features, a necessary means to an end. They told us that those foundational features were causing the most issues, and if we could fix those, they could build upon them to accomplish our original big idea, as well as many others. And we ended up pivoting and making those foundational features the product, which was our marquee launch last year.
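For illustration, here is a minimal sketch of that kind of topic tagging. The talk describes doing this in plain Excel, so this is just one hypothetical way to group tagged divergent comments into themes; the tags and comments are invented.

```python
# Minimal sketch of topic tagging: group divergent comments by tag so that
# the most-discussed tags surface as broader themes.
from collections import defaultdict

# Each entry: (topic tag, divergent comment) captured during analysis.
tagged_comments = [
    ("foundational features", "We struggle most with the basic data plumbing."),
    ("big idea", "Connecting these systems would be hugely valuable."),
    ("foundational features", "If the foundation worked, we could build the rest ourselves."),
]

themes = defaultdict(list)
for tag, comment in tagged_comments:
    themes[tag].append(comment)

# Tags with the most comments point to the strongest emerging themes.
for tag, comments in sorted(themes.items(), key=lambda kv: -len(kv[1])):
    print(f"{tag} ({len(comments)} comments)")
    for c in comments:
        print(f"  - {c}")
```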
Sarah Flamion The third strategy I wanted to talk about today is gap analysis. In this project, the primary goal was to understand existing context from a customer perspective, and we were talking to marketers who were trying to send emails using different types of data. The first part of our analysis was done following Indi Young's mental model framework. I'm not going to go into deep detail about that here; there are lots of articles about how to use this technique. But my research partner and I conducted interviews with customers where we dove deeply into their current processes. They described for us how they completed tasks today, and we compiled those responses into a visual that could help our stakeholders understand big process phases for our customers. Within each phase, we looked at the primary tasks; those were broken down further into specific activities. And we called out additional details, such as the current tools used and the current roles. This just gave us a visual way to explore current context. The analysis was done in Excel, and we created this visual with Lucidchart. The technique I wanted to explore today, however, is the gap analysis. Were there any gaps in the way people were talking about this process, or their challenges today, as compared to the way they described their current process context? We went back through the qualitative data, and we used Excel to tag it. The tags might differ depending on your project, but these are representative of some of the things that we tagged. We tagged frustrations that customers expressed, goals they were trying to achieve, use cases they were sharing that could really illuminate specific examples, tools they were using, and emotions that were being expressed. Now, in addition to providing great insights for the team (top frustrations, common goals, topics driving strong emotion), this also provided great detail for a gap analysis. We took the tags, compared them with the current context we had just outlined, and looked for gaps. As an example, in this study's discussion, participants expressed a lot of frustration with data management: data not being available, data being unclean, delays introduced because of data latency, things like that. Interestingly, the gap analysis showed that almost none of the current activities that they listed involved data management. So this gave us a great example of sort of a hidden success factor: eventual success depended heavily on something that they weren't even explicitly talking about as part of their existing process. Acting on a gap analysis can take many forms. Gaps might be activities that need to be incorporated in an eventual solution, like what you're seeing here. Gaps might highlight coordination needed between teams. Or gaps might highlight holes in understanding, places, for instance, where customers understand parts but not all of the problem space, and might need more upfront explanation.
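A minimal sketch of how such a gap analysis might be mechanized, assuming hypothetical tags and activities: compare the set of frustration topics against the topics covered by any listed current activity, and whatever is left over is a candidate hidden success factor.

```python
# Minimal sketch of the gap analysis: topics that draw heavy frustration
# but appear in no described current activity are candidate hidden
# success factors. (All data here is hypothetical.)
frustration_tags = {"data management", "campaign setup", "approvals"}

# Current-context activities mapped to the topics they touch.
current_activities = {
    "build audience segment": {"campaign setup"},
    "draft email content": set(),
    "route for approval": {"approvals"},
}

covered = set().union(*current_activities.values())
gaps = frustration_tags - covered
print("Gaps (frustrations with no matching activity):", gaps)
# -> {'data management'}: deeply frustrating, yet absent from the
#    process customers actually described.
```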
Lastly, let's talk about problem space exploration. You might be asked to explore a new product space where the problem itself may not be well understood, or where there's disagreement around the priority of existing problems. One way to tackle this is with guided discussions, to enable yourself to find richer insights during synthesis. Sometimes I like to pair facilitated, guided discussion with worksheets. So in this example, as the conversation progressed, participants would be given time to jot down their own specific answers. Pictured here are two front-and-back worksheets that I used in some research sessions at a recent conference. I did this in person, but I have seen similar activities done remotely. In one, participants were given a link to a survey at the beginning of the session, and as it progressed, the participant was instructed when to move forward to the next question. I've also seen this done with spreadsheets, where participants were periodically asked to enter data in specific columns. When I conducted synthesis on this, I had multiple kinds of data to work with. I was able to use some data to share quantitative insights backed by qualitative description: for example, data around how customers rated their existing level of knowledge around a product space, and then qualitative details describing why they rated themselves that way. I was able to use data entered on the worksheets to create charts, so top challenges, or in this example, the number of attributes that participants were using to create marketing audiences. And then all of that information could be supported by insights from the discussion: illustrative quotes, interesting themes that surfaced.

Sarah Flamion Another technique that you could try is exploring problem spaces with storyboards. It's probably abundantly clear that superior drawing talent is not required; I just drew the stick-figure diagrams in Adobe Draw. But the idea is that you use a storyboard to represent each problem. We then had customers describe things like whether or not they experienced the problem, and if so, how frequently and how severe the problem is when they experience it. In synthesis, we were able to put some quantitative understanding around the problems themselves. Then the second part of the activity really drove some interesting conversation. We just used Google Slides, and we created a slide for each problem that had text boxes, each of which depicted an idea we had for how we might solve that problem. And on the slide, we had images of a thumbs-up, a thumbs-down, a question mark, and an award. During conversation, customers were able to upvote or downvote ideas, indicate ideas they didn't understand, and pick a favorite idea. The dialog as they were making these choices was incredibly interesting. And we also explored any ideas that felt surprising or that were missing. During analysis, we were able to give lots of quantitative detail: favorite ideas, total percent upvoted versus downvoted, most resonant problems. We could count how many missing ideas were surfaced and use that as an indication of how comprehensive our idea sets were. And then we were able to provide detail to stakeholders about additional themes that surfaced, like skepticism about how well artificial intelligence would work in practice, or excitement around big concepts that allow for experimentation. And we were able to give big-picture assessments around things like how well we understood the problem space.
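As a rough illustration, here is a minimal sketch of how those voting results might be tallied per idea. The actual activity used image stickers on Google Slides; the idea names and votes below are hypothetical.

```python
# Minimal sketch of tallying storyboard voting: percent upvoted vs.
# downvoted per idea, plus favorite and "didn't understand" counts.
from collections import Counter

# Each vote: (idea, one of "up", "down", "confused", "favorite").
votes = [
    ("auto-recommendations", "up"), ("auto-recommendations", "favorite"),
    ("auto-recommendations", "up"), ("ai-copywriter", "down"),
    ("ai-copywriter", "confused"), ("ai-copywriter", "up"),
]

by_idea = {}
for idea, vote in votes:
    by_idea.setdefault(idea, Counter())[vote] += 1

for idea, counts in by_idea.items():
    rated = counts["up"] + counts["down"]
    pct_up = 100 * counts["up"] / rated if rated else 0
    print(f"{idea}: {pct_up:.0f}% upvoted, "
          f"{counts['favorite']} favorites, {counts['confused']} confused")
```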
Now I'm going to let Kellie talk to you about some strategies you can use with large customer events.

Kellie Carter Now imagine five countries in nine days, with over 170 customers and internal team members. Or three three-day training sessions across three different cities, with over 80 early adopters and evangelists for your beta product. This was my year last year, as we did the run-up to a big product launch. This was an unprecedented opportunity for me as a researcher to learn a lot from customers in a very short amount of time, but it also had its own unique challenges: primarily, the fact that there was a lot of scope involved, a lot of disparate data sets, and a lot of different audiences who wanted to understand and learn from our findings. And these all posed unique synthesis challenges as well. First, I was part of a ride-along in most of these events, which meant I was not really in charge of the end-to-end experience, the agenda, or the participants. But as I was riding along and engaging with the different product teams and going across all these different spaces, I was deeply embedded in all aspects of the project.

Kellie Carter This allowed me, as part of the team, to help with the planning and create the schedule, the focus, and the research objectives throughout. On the trip, we had lots of opportunities to listen, learn, and engage with customers and conduct research. And because we were so embedded in the end-to-end process as a research team, we were also on stage, both with customers as well as presenting findings. Throughout, we were developing partnerships that provided excellent colleagues, sounding boards, and synthesis collaborators, to do debrief sessions and conduct synthesis in taxis, trains, airport lounges, and hotel lobbies. Being embedded meant helping build the agenda, and that's where one of our key tactics came in: weaving different types of research into the entire set of events. For example, our event schedule for a three-day training looked something like this, including a day-long hands-on enablement, plus introductions, use case overviews, and roadmap reviews. Throughout these, we incorporated different research tactics into micro-moments, small-scale research that allowed us to be engaged throughout the event.

Kellie Carter So, for example, through the lens of research, we made sure that we were capturing valuable customer inputs and artifacts. When we were doing a use case overview, we had participants create their own use case proposals, and then we captured those for additional feedback and analysis as part of our overall understanding of what customers needed out of our product. For the day-long hands-on learning activity, in which users did an end-to-end setup and implementation of a fairly complex data product, we used a simple Google Form to input all kinds of different issues that we were seeing, from usability bugs and issues to documentation feedback and even engineering defects. Using this form, we were able to speed input and categorize the data. And while a couple of us researchers were running around capturing these on the fly in the training room, we were also having customers themselves input issues if they found things and we weren't there, so we could even do this remotely as we were going through and capturing all of this feedback and data. Another tactic we employed was incorporating quick surveys throughout the schedule to elicit key insights from participants. In this case, we were doing some product prioritization, which was really just a quick feedback survey where, similar to what Sarah did, participants could very quickly go through and say whether they didn't want to use a product or feature, or needed it sooner rather than later. This allowed us to gauge the success of any of these discrete moments in the schedule, and also not have to rely on user recall at the very end of a three-day process.

Kellie Carter We had one dedicated half-hour session at the very end of the three days where we got feedback on the overall session and product. But we also wanted to really measure KPIs. In our training, the primary KPIs were how confident participants felt both explaining and implementing our new product solution. Being able to gauge those KPIs, as well as how participants felt about the training, was really important for us, because it allowed us to get real-time feedback to make improvements between the sessions, and also to start planning for our follow-up after we finished these training sessions. But really, the biggest data collection method was listening and taking notes throughout the entire three-day sessions.
I did this both remotely, by listening in to the conversation from my desk for three days, and in person. And we often did this with not just me or another researcher listening, but also members of the product, engineering, and documentation teams who were attending or listening in as well. So we ended up with a communal document with all kinds of customer questions, concerns, and different viewpoints from the different team members and their respective disciplines. As you can imagine, we ended each of these sessions, and the overall process, with massive amounts of unstructured, messy data, from the qualitative notes from all the different team members to multiple survey instruments and the Google Form spreadsheet. But we anticipated this, so we prepared for it. We created an initial tagging structure ahead of time that had things like feedback type, user type, and functional area, and we baked these different tags into our different inputs. So, for example, the feedback type was very prominent in the Google Form, where we were capturing all of the different bugs and usability issues. We had user types to document the use cases and understand what kinds of perspectives the customers were bringing. And we were able to document functional areas in the new product as well. Having all of these tags initially set up and tied to our input instruments allowed us to very quickly sort and categorize initial data coming in. This was important because we didn't always have the luxury of time. One way we tagged some of our findings was by which audience was interested in them, so we could quickly produce findings for teams that needed this user input for their work.

Kellie Carter For teams like engineering and documentation, who were in the process of finalizing their components in the move to the MVP launch, we provided a bug list and a documentation sheet that was presented almost as soon as we were finished with the sessions. The events team also needed immediate feedback to iterate in the week between events. So we sliced and diced: we ended up using a lot of our tagging and pre-planning to slice all this data into findings that we could quickly disseminate to the teams that needed it immediately, and then to produce the additional deliverables that we needed for leadership, product, and design, where we had a little more time after some of these events. But again, the key here is that we had all this data, and we needed to make sure it got to the right people at the right time, with just-in-time delivery.
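A minimal sketch of that slicing, assuming hypothetical records and tags. The real pipeline was a Google Form feeding a spreadsheet; the point is that with feedback type baked into each record at capture time, producing a team-specific cut is a simple filter.

```python
# Minimal sketch of slicing pre-tagged feedback for just-in-time delivery:
# filter the communal data set by feedback type so each team gets only
# what it needs, as soon as the session ends.
records = [
    {"feedback_type": "bug", "area": "segmentation", "note": "Import fails on large CSVs"},
    {"feedback_type": "docs", "area": "setup", "note": "Step 3 is missing a screenshot"},
    {"feedback_type": "roadmap", "area": "activation", "note": "Needs audience suppression"},
]

def slice_for(team_tags, data):
    """Return only the records tagged with a team's feedback types."""
    return [r for r in data if r["feedback_type"] in team_tags]

engineering_list = slice_for({"bug"}, records)     # near-real-time bug list
documentation_list = slice_for({"docs"}, records)  # doc fixes before launch
print(engineering_list, documentation_list, sep="\n")
```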
Sarah Flamion In all of these examples, there are some common principles we try to abide by, based on lessons learned. As I hope was obvious, a lot of our recommendations centered around thinking about the types of synthesis we wanted to conduct, and using that to frame our methodology choices and how we collected data: thinking about the story to tell, and then building to that. Here are two examples, one where we might have made some changes in hindsight and one where we really pre-planned successfully. In this first example, Kellie and I worked on several projects where we were talking to customers who manage data for their organizations. We were exploring current context, and we definitely collected data around that. But as the sessions went on, it became obvious that these customers represented a new type of persona that wasn't in our existing set. In retrospect, it would have been really useful for us to ask some additional questions that are common to persona explorations, things like what department the customers reported into, or how they become and stay educated about their domain. Had we thought about that persona exploration as part of the story we wanted to tell, we could have asked more questions and had a richer story to tell in the end. As an example where we did think early about the desired synthesis, we were leading an exercise at a conference booth. We knew we wanted to use this exercise to help illustrate how customers were feeling about various stages of cross-functional projects. We crafted an activity around emojis: customers walked through a worksheet with us, and they used emoji stickers to explain how they felt about various stages of a project they identified. This was a big hit; customers loved the emoji stickers, I really can't overstate that. And it allowed for a pretty fun synthesis. We made an emoji legend where we mapped the emojis to positive, neutral, or negative feelings, and then we tallied those emotions from a sample of participants, and some clear trends emerged. So, for example, almost three-quarters of participants expressed negative emotion about collecting and integrating data. While this was done in person, similar techniques can be used remotely. For example, you might use survey questions that include some mode of scaling, or activities like the earlier up- and down-voting example that we talked about, but where customers are selecting from more emoji options.
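For illustration, a minimal sketch of that emoji tallying, with a hypothetical legend and responses: map each sticker to a sentiment, then tally sentiments per project stage.

```python
# Minimal sketch of the emoji legend: map sticker choices to sentiment,
# then tally per project stage. (Mapping and responses are hypothetical.)
from collections import Counter

EMOJI_SENTIMENT = {"😀": "positive", "😐": "neutral", "😫": "negative", "😡": "negative"}

# Each response: (project stage, emoji sticker chosen).
responses = [
    ("collect and integrate data", "😫"), ("collect and integrate data", "😡"),
    ("collect and integrate data", "😀"), ("launch campaign", "😀"),
]

for stage in {s for s, _ in responses}:
    tally = Counter(EMOJI_SENTIMENT[e] for s, e in responses if s == stage)
    total = sum(tally.values())
    print(stage, {k: f"{100 * v / total:.0f}%" for k, v in tally.items()})
```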
Kellie Carter People are talking a lot about resilience these days, and it's a key skill to have when you're dealing with messy, indeterminate spaces. One of our other key learnings is just being flexible and resilient. As the researcher on these big customer events, I didn't have a lot of control over everything. But in the places where I did have control, I tried to maximize our research inputs and outputs and provide as much structure as possible. This meant being very flexible. We started with a whiteboard in Vancouver, with just a small core team of four of us. As we moved through creating draft plans and adding more people in, and as we continued to evolve our schedule and how we were presenting our materials, the research needs and what kind of data we might capture changed over and over again. So by the time we finally got to our first agenda and presented in San Francisco, we had a lot more people, and things had changed greatly over the three weeks it had taken to plan and stand this up. But we still managed to include research in a deep way, because we were flexible and resilient, making decisions that were always best for our synthesis needs. The other thing that's important to think about is, again, how you'll present what you've done after the fact. When you're synthesizing, be thinking about the narrative you're about to tell, and also imagine what kind of story you need to be presenting to your different customers or your interested stakeholders. When I talked about those KPIs for the training, it was important to note that our participants were saying they felt really good about talking about the product, but really poorly about implementing it. And so as we were presenting these findings, we were going not just with the tactical pieces or the factual pieces; we had to really tell the story that allowed our stakeholders to understand what we needed to do in order to fix that for the customers and participants in these sessions.

Sarah Flamion And then lastly, don't forget that when you share your findings, it should be fun. You work hard on this synthesis, and you want to make sure that you present your insights in a compelling way. So think creatively about the visuals you use to convey your findings. When your stakeholders are more emotionally engaged in your narrative, the insights will stick in their memory longer. This might be a fun theme like the one depicted here, or it may take the form of interesting, compelling content like video clips, memorable quotes, and clever diagrams. I sometimes check out Diagrammer for ideas, or you can use resources like Storytelling with Data to think about how to present data in the most compelling way possible.

Kellie Carter Across the examples and learnings we've shown, the key takeaway is to never stop thinking about synthesis. The questions you need to answer, the data you need to collect, and how it will all come together in a powerful and compelling narrative all hinge on your ability to make sense of things. The term "savvy" comes from the Latin sapere, which means to be wise; combined with synthesis, which means to put together, we've shown you how to be wise in putting together your data in the end. We hope that we've shown you some new techniques and methods that you might use the next time you approach learning from your customers and presenting what you've learned. Finally, thank you for your time and interest today, and I think you'll have the option to ask us questions through Slack.

Sarah Flamion Thank you.

Transcribed by https://otter.ai