2012 Main Conference Talk
In 2010 the new AOL leadership created the Consumer Experience group to put the consumer at the heart of the product design and development process, and to ensure that AOL ships only high-quality products. One of the tactics we adopted to address UX-related issues, large and small, was to focus on fixing the most basic broken experiences first. This established a quality baseline, and created a culture of strict attention to detail and constant pitching in together to fix what needs fixing.
Some of the questions we are going to address during the talk are:
- When joining a team or project that needs a UX “cleanup,” where should I begin?
- Can talking about the broken experiences broaden the conversation beyond UI bug-tracking and terrible experiences that are nonetheless “working as designed”?
- How can you get buy-in from and engage the entire company in fixing broken experiences?
- Do you get more bang for your buck with many quick fixes or by tackling bigger problems?
- What do you do when fixing a problem opens up organizational cans of worms?
About the speaker(s)
Gabi Moore is a product designer with 15 years' experience designing, prototyping, and building digital products. She's worked with companies like FutureAdvisor, AltX, AOL, and SalesCrunch.
She was born and raised in Rio de Janeiro, Brazil, spent some time in England, Australia, and New York, and recently moved to Portland. When Gabi is not working, she's either traveling or looking for a new Lego set to add to her collection.
Christian Crumlish has a decade's experience leading product teams, with several decades of UX success behind that. He is currently building a product team at COVID Safe Paths, a volunteer open-source nonprofit privacy-first contact-tracing solution for Coronavirus. He consults on product and UX leadership at Design in Product, where he also hosts a community that explores the overlap between UX design and product management.
Formerly, he was head of product at 7 Cups, winner of the 2016 Stanford Medicine X Prize for health systems design and a 2019 World Economic Forum Pioneer. He has also co-chaired the monthly BayCHI program, was a director of product at CloudOn, was director of messaging products for AOL, was the last curator of the Yahoo design pattern library, and served two terms as a director of the late lamented Information Architecture Institute.
He is the author of the bestselling The Internet for Busy People (McGraw-Hill) and The Power of Many (Wiley), and co-author of Designing Social Interfaces (O’Reilly). Christian lives in Palo Alto with his wife, Briggs Nisbet, and an ever-growing collection of ukuleles.
Gabi Moore: Thank you all so much for being here. I can’t believe you chose to be here instead of listening to Josh Clark, so I really appreciate it.
So, we’re going to talk about Broken Experiences today, fixing UX one pixel at a time. For those of you who don’t know me, most of you probably don’t, this is my first time talking at a conference, so.
Gabi: Thank you.
I’m Gabi Moore. I work at AOL. Been there for about a year. I work with the amazing designers in the AOL Mail team. And this is Christian Crumlish.
Christian: I’m Christian Crumlish. Gabi used to work on the Consumer Experience team. I recruited her, in fact. One of our proudest accomplishments there. And I am currently the senior director of product for the AIM team.
Gabi: OK, so just to give you an idea of what we’re going to talk about. Broken Experiences was a really simple WordPress blog that the Consumer Experience team set up as an internal tool. Basically, the idea was that anybody who worked at AOL, if you found anything that was wrong or broken with an AOL product, could go in and submit a post, and then that would get communicated to the product teams and hopefully get fixed.
So, it was one of the first projects of the Consumer Experience team. And the idea was that we needed somewhere to sort of start a UX cleanup, basically.
So, the things we’re going to talk about are how we started, some of the issues we dealt with, the process that we used to deal with those issues, the site itself and how it evolved. And then, we’re going to end with some of the lessons that we learned.
So, I’ll let Christian start talking about how it all started.
Christian: To begin with, I came to AOL in, January, February, March, April of 2010 to work for Mahdi Shankar. One of the things Mahdi told me on my first day was that there was a rule on the Consumer Experience team, and the initials for the rule were AMM. And AMM stands for “Always Mention Mahdi.” He was teaching me that, in the corporate environment, when you have success, you need to promote your boss and your team. He always mentioned Brad, who is his boss, and it goes all the way up the line like that.
The other rule was MMLG, which stands for “Make Mahdi Look Good.” That’s why we have a nice picture of Mahdi here on the screen. The consumer experience team was brought in to help AOL to turn around and to try to improve the product development process by putting the consumer at the heart of the design and development system, essentially to bring UX thinking as much as possible to the whole company.
Mahdi always explained to us that there were really two missions going on at the same time. One was to put a floor under the quality, to say that there was a level of quality that was a baseline that was expected everywhere. The other was to help people become awesome and reach for the sky.
Those are really two different goals entirely, and I think it’s important to distinguish those things. It’s one thing to be fixing typos and making sure that stuff doesn’t break. That just gets you in the game, that just gets you to the point where you’re not dismissed as being crummy, or something like that.
But that doesn’t necessarily make you win, that doesn’t make you awesome. Really, we’re not talking about awesome. We used to talk about…Besides broken.aol.com, we wanted to also put up a site called awesome.aol.com. That half of the project never actually happened. Anyway, that’s some background there. OK?
The second thing was that the CEO of the company, Tim Armstrong, sometimes used to talk about the idea of what was called a “wall of shame.” He felt that there were a lot of subpar products that AOL had been willing to ship, or that had once been good but had slipped in quality, and that we, as proud employees of AOL, shouldn’t be comfortable with products that were out there that didn’t work well, that confused users, that were broken in some way or another.
And he actually talked about, we should have a wall of shame. I think he was literally thinking there should be a physical wall in the New York office where bad things were posted. But that idea was sort of the seed or the spark of what it actually became, the “Broken Experiences Project,” which was to have an internal tool where people could mention that something was broken or shameful or embarrassing and help to get it fixed.
Both Mahdi and I had worked at Yahoo, and Yahoo internally had a Firebug plug-in for the Firefox browser that put a footer on every Yahoo page that enabled anybody to report a bug to Bugzilla. That was another one of the inspirations: this idea that while you’re in the company and you’re using the company’s products, if you see something that’s wrong, you really need to report it and get it fixed, because who else is going to notice?
We really wanted to get to the point where every team was always checking their own products all the time on every device and finding these things, but we wanted to start somewhere by doing it ourselves and eventually spreading out to the whole company.
One of Mahdi’s first projects, even before he brought me to AOL, was just this massive cleanup, where he posed with a janitor’s mop and invited the whole company to find typos and broken links and get them all fixed. They got a tremendous number of them fixed in the first couple of weeks, and then 90 percent in the first month or two.
One other thing about the Broken Experience idea was that there was a little bit of a debate about whether the site itself or the wall of shame concept should be made public or should be private. And ultimately we went with the slightly more conservative approach that we will share this information internally, but we’re not going to broadcast to the world every shameful thing that we discover across the AOL network.
Even when we did launch Broken Experiences…I don’t want to get ahead of the story, but when we did that, there was some worry. I mean, the PR folks were worried that it would leak out, that it would become embarrassing, that people would share the screenshots or something like that. But our attitude was these broken experiences are public. They’re already out there. We’re not really making anything bad. We’re just reflecting on what’s been going wrong.
A couple other things. As I mentioned, we had the idea of bringing up the baseline level of quality and also having ways to shoot for the stars. That’ll be another talk at some point. We tried a number of different things. We created a document called “Basic Rules for AOL Products.” And we boiled it down to, I think, 41 rules or something like that. We circulated that and we did a road show.
Essentially we were experimenting. We were trying anything we could do. We worked with the HR team to rewrite the UX and product manager grid, the staffing levels, what sort of requirements you want to look for in hiring. We touched in some ways the whole company in trying to improve product quality.
Just one or two other points about why we chose the name “Broken Experiences.” Obviously, we wanted to focus on user experience, and our team was called “Consumer Experience,” so there’s this idea of “experience” as a very broad concept of what’s going on. We didn’t want to say “Broken UI.”
It wasn’t just about rounded corners that were off by one pixel in certain browsers or links that were broken, though those things were included. We also wanted to say that if the user gets confused and ends up in a loop and doesn’t know what to do next, just because it’s working as designed doesn’t mean it’s a good experience.
We didn’t want to put it just in terms of bugs, because bugs are very narrow. A bug is essentially: it’s designed to work this way, and it doesn’t work this way. We wanted to say it might be designed to work a certain way, but that’s no good. We figured that “broken experience” was a bigger, broader concept that got us a little bit away from just the idea of technically fixing bugs.
Also, we used WordPress because we wanted to be quick and nimble about this and not build some kind of robust, complex enterprise software, but just quickly stand up a blog and start using it right away. It got embellished over time, and Gabi will talk about some of the ways she made it awesome.
At first, it was the simplest thing to do, and it was a very manual process as well. We would find stuff, or people would report stuff to us, and then we would track it down, take a screenshot, describe the problem, find which product manager or UX person had some authority, tell them about it, and then nag them about it and then check on it and see if it got fixed.
We brought in this incredible intern from Stanford a year or so ago named Bree Bunghi, and she developed a reporting tool so that at the end of every week Mahdi could say, “There’s been this many broken experiences captured, these many have been fixed, these many are still left,” and what the rate was and what the turnover was from the previous week.
One trick that we played on new team members like Gabi when we hired them was, as a way of learning the company, we’d say: find 30 broken things across AOL. It was an excuse to get them to learn all the products, understand what they are, and check them every day, which is part of our day. I think Amy cheated and found all the errors at Moviefone and MapQuest, and that wasn’t really fair. It was a good way to scour the site and find the problems.
At some point, as I said, we recruited Gabi. I actually went on a UX portfolio review trip to New York early on looking for good freelancers that we could bring into the company, if at all possible, and went through my network. I started with Whitney Hess but I talked to a lot of other people as well. Probably 17 different people told me that Gabi Moore is one of the people you definitely need to talk to when you’re in New York.
Gabi came in, I looked at her portfolio, and I put her at the top of the list. Then when we started getting reqs to hire people, we went to New York, interviewed some folks, and hired Gabi. As Gabi got on-boarded into the team and integrated, one of the first things I did was give her Broken Experiences, so it wouldn’t be my responsibility anymore, and she took it to a whole other level. It was one of the best decisions that we made, I think.
In terms of taking it to a whole other level, I’m going to turn most of the rest of the presentation over to Gabi at this point.
Gabi: Alright. I wanted to start by just showing you guys some of the issues that we dealt with, the issues that were reported every day. They go from things that are really simple to things that created really long threads and discussions and arguments.
Gabi: Yeah, pushback. One of the most common ones was, of course, typos, especially after AOL acquired the Huffington Post about a year ago. The Huffington Post turned out to be a site that AOL employees already were users of so we started getting a lot of reports for that. This example is actually from Patch, which is a local news network.
The other big one, of course, was broken links. Especially with articles that come and go, there are new links being posted every day, and a lot of times we didn’t know they were broken, or they’d break the next day, and so on.
To solve this problem, we ended up partnering with the analytics team, and they started creating daily reports of broken links across AOL products, so the specific teams could go in and see their broken links every day. That’s something they’re still doing; they only started doing it when we were running the site, but now it’s become broader and a better experience overall.
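The talk doesn’t describe how the analytics team built those daily reports, but the core of such a checker is simple enough to sketch. The function names here are illustrative, and a production tool would use a real HTML parser and a proper crawler rather than this regex scraper:

```javascript
// Sketch of a daily broken-link report (illustrative, not AOL's pipeline).

// Crude href scraper; good enough to show the idea.
function extractLinks(html) {
  const links = [];
  const re = /href="([^"]+)"/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    links.push(match[1]);
  }
  return links;
}

// Check each link and collect the ones answering with an error status.
// fetchStatus is injected so the checker can run without a live network.
async function reportBrokenLinks(urls, fetchStatus) {
  const broken = [];
  for (const url of urls) {
    const status = await fetchStatus(url); // HTTP status code
    if (status >= 400) {
      broken.push({ url, status });
    }
  }
  return broken;
}
```

Run nightly per product and grouped by team, a report like this gives each product owner their own list of dead links every morning, which is roughly the workflow described above.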
Then the third one, of course, is bugs. Those are the most black-and-white ones; typos and broken links are, too, but these are pretty easy to get fixed. They would usually get fixed within a day if they were considered really serious or really detrimental to the experience, but mostly within a week tops, I would say.
A lot of times, as we went along, we learned to go straight to the developers with bugs instead of going through product managers or designers. Just knowing who to go to, the person who had the power to actually get it fixed, really helped a lot.
Then after bugs we have some more uncommon, and I think the most interesting, types of issues we dealt with. The first one was branding violations. With a company like AOL, when I joined, I think we had close to 200 brands or something like that. Now it’s a lot less: a lot of them have been consolidated, a lot of them have been folded into the Huffington Post. But there are a lot of guidelines.
For example, there was a site called popEater, which I don’t know if you can even read there, but the E was supposed to be capitalized, and a lot of people didn’t know that. Not capitalizing the E is a brand violation, and we kept an eye out for that. At one point, I actually knew all the little words that had to be capitalized, and the ones that didn’t, for all nearly 200 brands, but now I can’t remember them anymore.
The other one was editorial issues, and these are a lot more subtle than typos. In this case, this photo actually had the female CEO mentioned in the caption, but the way it was cropped, she ended up being left out. Because it was automatic cropping, that kind of thing happened. That’s when the editorial team needs to come in and fix it, and they did, pretty quickly.
Christian: People got really mad when the wrong winner of “American Idol” would be shown in a photo.
Gabi: Yeah. Yeah, that created a lot of discussion, actually. They would show the bottom three but say it was the bottom two, and whatever. Then there were the visual design issues. In this case, it was just the symbol that was chosen, the little arrow to the right: it looks like you could click on it and it would reveal something and rotate, but nothing actually happened when you clicked on the arrow. You had to click on the text. Simply changing that to maybe a bullet point, or nothing at all, could solve the problem.
Last but not least, and these are my favorites, were the experience issues. In this case, this is a screenshot from MapQuest, their mobile site actually, and it shows results for CVS pharmacy in New York. The issue was that there was no way of knowing which part of New York it was, and if you live in New York or have been there, you know that there are five different First Avenues, so you need to know if it’s in Brooklyn, if it’s in Manhattan, where it is. Just having the city as part of the data shown in the results was something that could improve the results a lot for big cities.
That kind of issue was never black and white, and it always created a discussion over email. Sometimes the product teams would not agree that it was a broken experience or that it needed to be fixed. Our role was to give as much evidence and as many suggestions as possible.
One of the things that really helped was to keep the person who reported the issue in the thread, because the person who reported it was the person who was most frustrated with it. They would actually come in and defend the idea that it was broken and needed to be fixed. A lot of times it was really a simple fix, as in this case: it would just be adding one piece of metadata, and maybe only for locations where that kind of thing is a problem.
OK. So next I’d like to talk a little bit about the process. Christian already mentioned how it worked, how people emailed us, mostly at the beginning, the issues. Then we would go in, publish it on the site, and then pass it on to the product teams and do everything that we could to make sure it got fixed.
But there are a few little details that I would like to point out that I think were really important for the success of Broken Experiences. The first one is eating your own dog food. What we tried to do was to treat Broken Experiences as a product that we wanted to design and make sure was the best experience for its users, in this case AOL employees. So we experimented with things like the structure of the posts themselves and what should go on each post.
Christian: Once or twice people reported that there were broken experiences on our site, too…
Gabi: Yeah, and we would post those.
Christian: …and we posted them and fixed them, yeah.
Gabi: Yeah. So the first thing is clearer headlines. Basically, you should be able to read the headline, and you should be able to look at the image and know what the problem is. We also made sure we always put the name of the product on the headline. So as a product manager, you could go in and just instantly see, “No, this is not my product” or “This is my product.” We would also indicate whether it was fixed or not right there.
Christian: Each product was a category in the blog. So a product owner could bookmark that page and just get their own stuff, too.
Gabi: Yeah. And they could also filter by only broken issues in their product, so we could actually, we would send that URL pretty often to product managers.
The second thing is: keep the description itself really simple and extremely objective. Sometimes we would get really personal accounts of the issues in the first person, and we’d turn them into the third person. Sometimes we would mention the name of the person who reported it so that they could be reached if necessary, and we would always link to the URL where the problem was found.
And lastly, always an image. We tried everything we could to get an image, because it’s the best way to communicate what the problem was. The idea was that the description would be kind of just optional for you to read. Just with the headline and with the image, you should be able to see what the problem was.
So, every single post was structured this way. When people submitted one, those submissions would be saved as a draft in WordPress, so we would actually have to go in and edit each one to make sure it fit this structure. And I think the result was a blog that was easy to scan, where it was easy to see instantly what the problems were. It also kept a certain level of quality in what you could expect when you read an issue there.
The other interesting thing was our definition of what counts as fixed. Usually, when you have a bug-tracking tool or a ticketing system, what’s fixed is pretty clear, right? You basically have to go for the ideal solution. For us, sometimes this is what fixed meant, even if it was just temporary.
So, basically, you turn something that was broken to something that was not broken anymore, even if it meant a little patch. Or a lot of times, we would suggest just hide that feature, hide that button altogether. It’s better to not have a feature than to have a feature that is broken.
Christian: One example at one point was that a new version of Safari came out and our Flash slideshow plug-in stopped working in Safari. The person who had coded it didn’t work at the company anymore, and it was actually hard to fix. So there was a very long discussion thread that lasted about a week as people debated how to fix it and what to do about it.
Gabi: Yeah. And that caused some reluctance, but a lot of times, it was like, “Oh yeah, I hadn’t thought about that.” That’s cool. I’ll just change the messaging a little bit, just add an error message, which can be frustrating, but it kind of fixes the experience, at least temporarily.
So, the site itself. When I started working on Broken Experiences, it was, like Christian mentioned, a very basic WordPress site. And I had worked with WordPress before a little bit. I knew a little bit about the PHP tags and all of that.
I just started sort of hacking it and seeing what else I could add to the site that would help us have a better idea of what was happening and would also help product managers and designers going in, sort of find the issues that their products had.
So, one of the things we did was to start actually tracking stats. We created this little dashboard, and you can see the number of fixed issues, the broken issues, and the fix rate. We actually set up a goal of increasing the fix rate, which I believe was around 60 percent at first. Then it went up to like 78, 79, and it was 79 forever. And I was like, “When is this damn thing going to get to 80?” And eventually it did, and it was pretty cool.
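The arithmetic behind that dashboard number is worth making explicit. The formula below is my reading of “fix rate” from the description (fixed issues as a share of everything reported); the talk doesn’t show AOL’s actual code:

```javascript
// Fix rate as a whole-number percentage: fixed issues over all reported
// issues. This formula is an assumption based on the dashboard described,
// not AOL's actual implementation.
function fixRate(fixedCount, brokenCount) {
  return Math.round((fixedCount / (fixedCount + brokenCount)) * 100);
}

// e.g. 79 fixed out of 100 reported gives the 79 percent plateau mentioned.
```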
Christian: We were shooting for 90, I think, was our goal.
Gabi: We were shooting for 90, yeah. Yeah, we got close, 85. And this also helped circulate a little bit, like, these numbers ended up escalating all the way to the CEO at one point.
The other thing that we did was to create a little bookmarklet. The way people submitted at first was either by email or using a form on the site. Then we created a bookmarklet that you could just drag onto the bookmarks bar in your browser, and as you were navigating through AOL sites, you could just fire it up and report something right there. We would detect the URL and your browser information, and you just had to fill in the problem and attach a screenshot if you wanted.
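A reporting bookmarklet of that shape can be sketched as below. The endpoint URL and parameter names are hypothetical, made up for illustration; the talk doesn’t show AOL’s actual implementation:

```javascript
// Sketch of the reporting bookmarklet's core: build a submission URL that
// pre-fills the page URL and browser info, as the talk describes.
// The endpoint and parameter names here are hypothetical.
function buildReportUrl(endpoint, pageUrl, userAgent) {
  const params = new URLSearchParams({
    url: pageUrl,   // where the problem was found
    ua: userAgent,  // browser details, useful for reproducing the issue
  });
  return endpoint + '?' + params.toString();
}

// Dragged to the bookmarks bar, the bookmarklet itself would be roughly:
// javascript:window.open(buildReportUrl(
//   'https://broken.example.com/submit', location.href, navigator.userAgent));
```

The reporter then only has to describe the problem and attach a screenshot, since the context is captured automatically, which is exactly the friction reduction described above.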
So, that started getting used a lot. And we actually used that opportunity to promote Broken Experiences internally a little bit. And we got some support from the marketing team, and they created an awesome video to promote it that I want to show to you guys. Let’s see.
[ video playing ]
Man: The days all start in the fog of my mind,
I wake up, eat breakfast, I get online.
I head to my sites across AOL.
Editions, burlesque, and always Jezebel.
Patch in my mails and the Huffington Post,
The browser navigates the sites I visit the most.
Sometimes I find I come across an issue,
But sadly never knew whatever to do.
Links that don’t work, bad captions a typo,
Or the times you get a video that won’t load.
What do you do? How to report?
The answer’s here, what to do is back in your court.
And now you’ve got a bookmarklet to make things right.
Drop it in any browser’s latest version,
And click when you see stuff that just ain’t working.
Help us help you help them,
Download the bookmarklet, my friend.
Because details are the way we get to premium.
The element for sloppy is obscenium.
Expect a lot but don’t lose it.
Download the bookmarklet and use it.
Man: Oh, sometimes I get a good feeling, yeah.
[ video ends ]
Gabi: That was…
Christian: It went slightly viral inside the company.
Gabi: That was Ben Hudna, who’s amazing. So another thing that we had to constantly do was to adapt to the different issues. One of the things that kept our fix rate from going up a lot of times was the mobile issues, because, especially with iPhone apps, it takes so long for updates to be approved in the App Store that the issues would be fixed, but if they were not out, we wouldn’t count them as fixed, because they weren’t fixed for the user.
So we ended up creating a separate category, “awaiting release”: something was fixed, it just was not released yet. That was one of the ways in which we adapted to the issues that we found. And there was a series of learnings from dealing with the different teams, and because the process was so very manual, we could really sometimes craft emails for each team.
For some teams, I could just send the link without saying anything, like here’s a link to a broken experience on your product. For some teams, I had to actually include the summary of the problem on the email. Some teams, I could email the whole team. Other teams, I had to email one specific person, otherwise I would never get an answer.
So always sort of paying attention to who we were dealing with and what was the best process for them, so that we could all reach the goal of getting things fixed. Eventually, the Broken Experiences site found a new home. This is a screenshot from a presentation created by the new team who took over. And now it’s a whole team, which means that they can handle a lot more issues; issues keep coming in and getting fixed, and the scale of it is a lot larger now.
Christian: And we consider that a form of success, which is to take something that was basically an experiment by our little team, trying to hack the culture of the company, and to have it operationalized, to have it actually become a permanent part of the enterprise.
Gabi: Yeah. So I wanted to finish with some of the lessons that we learned. The first one was that all experiences count, including weird things like the milk situation. When the Huffington Post team moved into the New York office, the fifth floor, which used to have, let’s say, 150 people, now had 400, which meant that we would run out of milk on Wednesdays instead of Fridays.
That was a huge experience issue. People couldn’t work without coffee. That was one of the things I dealt with, was making sure that there was milk in the fridge.
Christian: People complained, too. I remember there were those glass doors in New York that say push, but then they say pull on the other side, and because it’s glass, even though the sign is reversed, your mind gets confused. Three or four people reported that as a broken experience.
Gabi: We became a hub for any kind of issue across the company. The position of the nap rooms and the nap pod, all of that. It was all fair game and we always fixed them.
The other thing was that the mechanism alone is not the solution. Broken Experiences by itself was good, but without a series of other initiatives it wouldn’t have been as successful. Do you want to speak a little bit about the speaker series?
Christian: I’ll say a few words about this. This is mostly about what else we did besides Broken to try to help with quality across the company. I mentioned the baseline versus the blue sky. We did try to do some things to help people aspire to be better designers, to make better products, to learn about best practices, to update their lingo, anything like that.
One thing was that we organized a speaker series and we tried to bring in awesome speakers every month. We started with Luke W. and we set the tone with that. We did a lot of helping teams out. We would do product reviews if people wanted them on the spot. We would give short term help but we would also do longer term engagements and embed on teams and help them the way consultants do.
We sponsored events to help create more communication between the internal AOL culture and the external UX and product development world. I organized a product summit in I guess it was 2011 where we brought all the best product people from around the company and put together a two day conference in the Palo Alto office and we got them really drunk and got them all to meet each other.
We actually locked them in a room with some bourbon so they could just bitch to each other and get all of their issues out. Then we brought them back the next morning and put them in the room again with some more bourbon so they could talk about how to fix all those things they had been complaining about the day before.
We helped out with the un-university, an internal speaker series that gets talented people to talk to other people in the company.
We did stuff that didn’t work. Like I created a product launch checklist that I worked on for probably nine months that really never went anywhere. I wouldn’t say it was a waste of my time, but it didn’t really have the results. We were experimental and we were willing to say that didn’t work and just throw that in the trash and move on to something else.
Gabi: The last thing is: no one-size-fits-all policies. With the number of different products we have at AOL, we had to adapt, we had to listen to the teams, and we had to find the best process for them, so that we could all reach the common goal, which was to have fewer broken experiences in our products. Thank you very much.
Christian: I think we have a couple minutes if there’s any questions.
Gabi: Yeah. If there are any questions or anyone has a similar experience of fixing broken stuff. Alright.
Audience Member: [inaudible 00:28:37]
Christian: That’s a good question. We didn’t have a formal credit of who reported it because we caught a lot of the stuff ourselves. We did tend to say John Smith noticed that over on AOL Sports there’s a broken link or something like that or used a blog format of saying it came via such and such a person and then we’d link to their Atlas page.
Atlas was another project we did when we started, which was to take the internal phonebook and turn it into slightly more like a profile page, kind of an internal Facebook-y thing. We mostly wanted to give credit to the people who were finding stuff. We weren’t worried that people wouldn’t do their job and just spend all the time browsing the site.
The problem was more the opposite, that people weren’t reading the site. Even though they worked for the company, they weren’t using the products, or at least critiquing the problems. We actually wanted to encourage that.
At one point, we never did this, we were going to send T-shirts out to people who had been reporting the most, something that says “I fix shit” or something like that, but we never actually did that.
Audience Member: [inaudible 00:29:47]
Gabi: The question was would people submit requests for features or suggestions or things like that? Yeah, they would, and we would always forward it to the relevant product team. We would just not post those on the site unless it was evidence of something that was broken and then we could actually go find what was broken and post that as a suggestion for how it could be fixed. We tried to focus on what was broken first.
The suggestions could be one way to fix it but not necessarily the best way.
Audience Member: Hi. This is super awesome. I’m new to a company where I have similar kinds of challenges. And it’s very interesting to see the categories of things that you considered, and that they’re somewhat mutually exclusive, so you can think about them differently. I’m very curious about what knowledge and tools you guys needed to have to understand the ecosystem.
You mentioned branding violations. I assume you needed to know all the brand style guides to be able to assess those things. What kinds of knowledge did you guys have to acquire to get to that point?
Christian: That was part of the team culture of consumer experience and part of what Mahdi instilled in us which was that we had to study the company, especially when we came in, when we joined the company. We had to learn everything. There were brand guidelines. We were supposed to read them and understand them.
There were taxonomic guides to all the AOL sites; they were out of date the moment they were printed, but we were supposed to know them.
When we first put Broken up we needed to have product contacts at all the various products and I would go to Mahdi and say who’s the product contact for such and such and he’d say, “Don’t you know?” It was expected that I should have made friends with that team and gotten to know them.
The list was initially built on the personal contacts we had made just as we were consulting across the company. We were the evangelists. We took that upon ourselves. That’s not a real answer, because I didn’t really tell you how. We just set it as an expectation, internally to our team, that we were supposed to master the space that we were…
Audience Member: How long was it between you guys getting started and actually launching and starting to get things in?
Christian: I don’t know exactly when Broken Experiences started, but it was probably a good six months after I started. I had enough experience by then to get it going. Over time, we formalized that product contact list, checked it with people, and it became more of an encyclopedic thing. At the beginning, we definitely leveraged our personal networks inside the company.
We expected it to be a high-touch personal-relationship-type thing. The end state now is that it’s much more automated and systematic, but we didn’t expect it to be like that on day one. We expected it to be really hand-carried a lot of the time.
Audience Member: I am completely fascinated by the fact that, over time, you found people submitting Broken Experiences in their environment.
Christian: That happened right away. That was very early on, yeah.
Audience Member: Wow.
Christian: People assumed they could critique even internal enterprise stuff. We would get complaints about the HR side, and stuff like that. We did report those and tried to get that stuff fixed, but people innately seemed to want to do that.
Audience Member: I see. First, I have a comment, which is I think that we often spend a lot of time complaining that people don’t understand what we do, so I commend you for being transparent about what you do. It was clear that people got it and could extrapolate it even further to the greater experience in the environment.
My question for you is to what extent did you feel that it was your responsibility to create a better employee experience, and how did you go about incorporating the feedback that you received?
Gabi: Well, Mahdi always told us that all experiences count. At the beginning that was very theoretical, but as we started dealing with things like the milk issues and the internal enterprise systems, it became very clear, and I started feeling it every day. I would…I don’t know. If we ran out of toilet paper in the bathroom, I would report it, because it’s essential to the employee experience.
I think it was very much part of the culture of our team. The more we dealt with the traditional broken-experience issues, the more we also took on the less traditional ones.
Christian: I, personally, strongly believe that the internal experience, the culture of a team, ends up being reflected in the customer experience, that a conflicted team creates a conflicted UI, a confusing one, and a harmonious team creates a good one. I believe that user experience starts in the bones of the product, not on the skin.
I think that’s just what we believe, but what you’re getting at, and I’m not sure exactly where it came from, is that employees intuitively got that, too. They felt empowered to report anything that wasn’t good enough, and it opened the door to having those discussions, which is pretty interesting.
Audience Member: I have a question about the term you use, “Broken Experiences.” The construct here is that you’re looking to fix things that are obviously broken. What about experiences that are not ideal, weird, or not as awesome as they could be, that people in the organization all know about, but there’s no consensus? Was this a venue for that…
Christian: Yeah. You might want to speak to it too, but when we started, we did debate what to call it. We argued about this for some time, because this was what we considered a baseline thing. That’s why we put it in terms of “broken,” not “subpar” or “not as awesome as it should be” or something like that.
But there’s a high degree of subjectivity there. And we were going beyond just what is a bug. And that’s why some of these debates would happen where the team would say, “Well, this is fine, we meant it to be that way.”
And the person who reported it said, “Well, I got confused and I couldn’t find the movie. And so, to me, that’s broken.” So we really just focused on the ordinary person: would they be able to use this? Would it work for them? It was about getting to good enough. It was not necessarily about getting to awesome.
We really wanted to keep those things distinct, because often, the basic stuff we were doing would get described around the company as stuff to make things awesome. We were like, “No, this is just making things not shitty.” This is not making things awesome, that’s a separate project entirely.
But calling it Broken Experiences was a way to at least broaden it beyond, is it off by one pixel or something like that. I mean, if it felt funky or weird, we would probably count that as a broken experience.
We’re just about out of time. Maybe one more question?
Audience Member: I’m curious why you didn’t do more about rewarding the finders. That seems like an awesome way to sort of step up and say, these people are totally on top of it, they’re finding all these bugs so our customers don’t have to. I would think you’d want to, like, get on a tall building and shout that to the world. I mean, that just seems like really great stuff.
Gabi: The question is about rewarding the people who are reporting? Yeah, at one point we did have a contest. The first month the bookmarklet launched, the person who submitted the most issues got a $100 Amazon gift card. But yeah, I think we could have done more to reward [inaudible 00:36:46] .
Audience Member: Yeah, I agree. I mean, the gift card was perfect, right? It’s actionable; it’s not just like, “Oh, a free coffee.” It’s like, “Wow, this is actually something substantial. I can go buy some fun stuff with this.” So I would say that sounds awesome, a continual culture where bug fixing gets you rewards.
It’s not just an obligation because you work here, blah blah blah. It’s like, “No.” This is like, “Good on you. You’ve got a sharp eye and we’re going to make it worth your while.” That’s awesome.
Christian: I think you’re right. And there are these positive feedback loops that are part of this whole thing. It’s not a great answer, but to be frank, we were trying to do a lot of stuff. We didn’t do every idea we had, and I think that was one that just…Probably I just never got around to designing T-shirts. That’s probably why it never happened.
Christian: Thanks, everybody.