David Dylan Thomas Hello and welcome everybody to Fight Bias with Content Strategy. We're gonna get started here, I'm gonna share my screen with you all. Alright, and let's get started. So my name is David Dylan Thomas and this is Fight Bias with Content Strategy: Using Mental Shortcuts for Good Instead of Evil. If you're interested in using them for evil we can have that discussion, but that's going to be a different talk. If you like to tweet during these things, our hashtag is #FightBias. And my name is David Dylan Thomas and I am a Content Strategy Advocate at Think Company. David Dylan Thomas So we do experience design, and the podcast that I host that is kind of relevant for this talk is the Cognitive Bias Podcast, and I want to start off by telling you a little bit about how I came to be hosting this podcast. So back in the day, I watched a talk by Iris Bohnet called Gender Equality by Design. And one of the things she brings up here is the idea that a lot of gender bias and racial bias just comes down to pattern recognition, as opposed to outright misogyny. So the example might be that you're hiring a web designer. And when I say the word web designer, like the image that might come, you know, unbidden to your head is, you know, skinny white dude. And it's not just, you know, because you think skinny white dudes make better programmers or anything like that. It's just that that's the pattern maybe that you've seen in television and movies and in the offices that you've worked in, right? And so if you see the name at the top of a resume that doesn't quite fit that pattern, you start to give it a little bit of the side-eye, right? And it's just an unconscious bias, but it's a reaction because it just doesn't fit the pattern. 
David Dylan Thomas And when I realized that so much of, you know, this evil of racial bias and gender bias can come down to something as simple and dare I say human as pattern recognition, I decided I need to learn everything I possibly can about bias. And so I did. This is the page from the RationalWiki on biases, cognitive biases. There's easily 100 here. And I looked at this, and I realized, okay, I'm not going to figure this all out in one day. So I just did one a day. I looked at one bias a day and learned everything I could about it, and then moved on to the next one, and the next one. And this turned me into the guy who wouldn't shut up about bias. And so eventually my friends were like, Dave, please, please just get a podcast. And so that's what I did. David Dylan Thomas So it's worth asking the question, what is cognitive bias? And essentially, it's a series of shortcuts your mind takes, you know, just to get through the day, right? We have to make something like a trillion decisions a day, right? Even right now I'm making decisions about, you know, how fast to speak, where to sort of avert my gaze, you know, how to move my hands. And if I had to think carefully about every single one of those decisions, like I would never get anything done. Right. So it's a good thing actually, that our mind takes these shortcuts to get through the day. Occasionally though, those shortcuts lead to error. And we call those errors biases. Now, some of them are fun and harmless like this one. It's called illusion of control. And the way you can see it is, if you are playing a game that involves rolling a die. If you need a high number, you'll roll that die really, really hard. Right? If you need a low number, you'll roll it really softly. Right? And it's not that, you know, you actually think that how hard you roll the die will affect the outcome, right? 
It's just that, you know, we like to think we have control even in situations where we have no control, and we embody that desire for control in how, you know, we throw the die. David Dylan Thomas There's another one called survivorship bias, right? And this is what I think is very relevant for designers. And it's how we judge the outcome based on the winners and not the losers. And a good example of this is the story of Abraham Wald, who was a refugee from World War Two. He was born in Austria, and he was Jewish, so he moved out of Austria when the Nazis moved in and was fortunately able to make it out. When he got to America, he was a brilliant mathematician, and he was asked to sort of assist in the war effort. And one of the questions he was there to answer was, hey, our planes keep getting shot down. We can add more armaments to sort of make them more resilient. Where do we add the armament? Right? And so he was in a room with the planes and with a bunch of different scientists, and most of the scientists said, Hey, see where all those bullet holes are? Just put more armament there, obviously. He took one look at those same planes and said, No, you need to put the armament where the bullet holes aren't, right. Why? Because they were looking at the planes that made it back. They really needed to be worried about the planes that got shot up so bad that they went down, right. And those planes clearly got shot where the bullet holes aren't. Right. They were looking at the survivors. He was thinking about the ones who didn't make it. Right. And we see that in design a lot. I remember the first job I had in design, I was doing content strategy for a sporting goods store and their homepage was kind of the first big design deliverable, and we created something that I thought was beautiful. I'm like, Oh, the client's gonna love this. And then the client didn't love it. 
And we went through three or four more rounds before we finally got to the design that the client liked. And it occurred to me, right, that so much of the web I see around me, right, is not in fact the first choice, right? It was the third or fourth choice that got through committee. In fact, the built world around us is not necessarily what was intended, right. It's what was able to be agreed on with the budgeting and the zoning and everything else. And it's usually like a fifth or sixth choice. So it's an illusion that's worth, you know, getting untangled from. David Dylan Thomas Confirmation bias is a really important one and it's exactly what you think it is, right? So you get an idea in your head and you become so convinced that it's true that you only really look for evidence to support that idea. And if you ever see any evidence that contradicts the idea, you basically say fake news and you move on. One of the most prevalent instances of this was during the Iraq war. So the whole point of going into Iraq was that Saddam Hussein had weapons of mass destruction, right. And he was going to use them, so we had to go in. It was very convincing. Turns out not so much. And as early as 2004, okay, war begins in 2003. As early as 2004, the President of the United States, the one who was insisting there were weapons of mass destruction there, was saying, Yeah, not so much. Even with that, cut to 2015 and 53% of Republicans still believe there were weapons of mass destruction there, and 36% of Democrats. And to really bring the point home, 51% of Fox News viewers, which would be a source that would be kind of confirming the bias, still believed, yeah, there were weapons of mass destruction in Iraq. We're going to come back to this bias. David Dylan Thomas Now, these are really, really hard to combat because you may not even know that you have a bias, right? 
In fact, there's a bias called the bias blind spot that basically goes like, you don't think you're biased but you're convinced everybody else is. Another factor here is that 95% of cognition happens below the conscious level. Now, I used to give this talk and I used to say 90%. But I've done more research because I'm actually turning this into a book. And yeah, it's actually 95%, right, of cognition happening below you even being conscious of it. Right. So the next time somebody asks you why you did something, the most honest answer you can give is how the hell should I know? Um, and here's the thing, even if you do know that you have the bias, you're probably going to do it anyway. There's a bias called anchoring. And the way it works is I could ask everybody, you know, listening to this, to write down the last two numbers of their social security number. And then all those people could go bid on a bottle of wine. And those who wrote down a low number are going to bid lower. Those who wrote down a higher number are going to bid higher. It's anchoring. It's a thing. Now if I begin that experiment, and I actually tell you, hey, there's this thing called anchoring and you're going to bid a certain way, don't do that. You'll still do it. In fact, I could pay you cash money at the beginning and say I will give you money to not do this thing. You'd still do it. David Dylan Thomas So why do we as people who create things, right, designers, information architects, content strategists, why do we care? Well, it's this thing called choice architecture. And the best way to demonstrate that is think about going to the grocery store and buying produce, right? Most people know don't pick from the top because the top is where the grocer is going to put the oldest produce that they're trying to move. And they do that because, you know, you're going to reach for the thing that's closest to you, right. You're lazy. 
So they've architected it in a way to advantage themselves. Now they could just as easily have architected it to advantage the user, right, and put the freshest produce on top and make that the easiest thing to get. But the whole point is that they're manipulating how they know people make decisions with the architecture of that design. Now, think about the decisions your user needs to make. Now think about how people make decisions. Not how we think they make decisions. But how they actually make decisions, right? 95% on autopilot, right? That's where your user is most of the time. There is no such thing as a rational user. David Dylan Thomas The thing is, there are content and design choices we can make that can help keep harmful biases in check, or sometimes use them for good. And that's what we're going to talk about today. So let's talk about blind resumes. Let's go back to that skinny white dude, right? As it turns out, in study after study, if you have two identical resumes, and the only difference is the name at the top, if it's a male dominated field, and the name at the top doesn't read as male, that resume is less likely to move forward. Right? But here's the thing. Why do you need that information? Right? What difference does it make? How is knowing the name helping you, the hiring manager, decide who should move forward in the process, right? Think of it as a signal versus noise problem, right? The signal is the qualifications, right? The experience, the expertise. The noise is the gender or the race, or whatever you're reading into the gender and race based on the name that you see at the top of the page. So the city of Philadelphia actually did an experiment around blind hiring for a web development position. And they learned two things pretty quickly. 
One is that if you actually want to blind a resume in the high tech world of web development, what you have to do is have an intern who has no stake in the hiring process physically print out the resume, get a marker and just redact it like a CIA document. The second thing they learned is that, okay, they found a resume with requirements that they like, experience that they like. And the next thing they'll do is they'll go to GitHub, which is a code repository, to basically see, like, the portfolio of that web developer. And what's the first thing that happens when they go to that web developer's profile on GitHub? All the personal information is there. The experiment is ruined. So clever people that they are, they created a plugin, a Chrome plugin, that would redact that information as it loaded. And then just to complete the circle, they took that plugin code and put it back on GitHub. So it's right there now, if you ever want to use it yourself. David Dylan Thomas Well, this isn't just about teaching humans how to be better hirers. This is about teaching AI to be better hirers. So not too long ago, Amazon had a hiring bot that was basically there to help hiring managers decide who to hire. And lo and behold, it kept favoring resumes that came from men, right, so much so that if it saw the name of a women's college on the resume, it would demote it. And they were trying to figure out, how did this AI become so sexist? And they looked at the training data, right? If you ever have an AI, the only way it's going to make predictions is by the data that you give it. And they looked at the data they gave it. And it turned out to be the last maybe 10 years of resumes, which makes sense, right? What do you think all those resumes have in common, right? The vast majority were for men. So the AI took one look at that and said gee, you sure must like dudes and then just kept recommending dudes. So we need to be very careful in how we train these things. 
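To make that training-data point concrete, here's a minimal sketch in Python. This is a toy illustration, not Amazon's actual system: the "screener" below does nothing but learn the base rates in the historical hiring data it's shown, so it simply reproduces whatever skew that data contains, and changing the data changes the recommendation.

```python
from collections import Counter

def train(resumes):
    """'Train' a toy screener: it just learns how often each gender
    label appears among the people who were hired historically."""
    counts = Counter(gender for gender, hired in resumes if hired)
    total = sum(counts.values())
    return {gender: n / total for gender, n in counts.items()}

def recommend(model, candidates):
    """Recommend whichever candidate's gender the model has seen
    succeed most often -- pattern recognition, not merit."""
    return max(candidates, key=lambda c: model.get(c, 0))

# Historical data: 90% of past hires were men, so the model favors men.
history = [("M", True)] * 90 + [("F", True)] * 10
biased_model = train(history)
assert recommend(biased_model, ["M", "F"]) == "M"

# "Lie" to the AI: feed it the world you want, not the world you have.
rebalanced_model = train([("M", True)] * 50 + [("F", True)] * 51)
assert recommend(rebalanced_model, ["M", "F"]) == "F"
```

The second dataset is the "point it at the world you want" move from the talk: deliberately constructed, slightly over-indexed data rather than the historical record.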
In fact, I'm very much in favor of lying to AI for situations like this, right? Where it's meant to make predictions that kind of influence the world we have. So if you want a less biased world, feed it less biased data, right? Don't point it at the world we've got, which is already super, super biased. Point it at the world you want, right? And then, like, you know, over-index for, you know, women of color and other groups that aren't being represented in your current workforce. That's my own personal opinion about that. David Dylan Thomas Now we can also see the flip side of it, right. So Amazon Go is a store experience where you go in, you take what you want, and you leave. There's no cashier. Amazon is just automagically like deducting stuff from your account. And this had an unintended consequence. Ashley Clark Thompson wrote an article where she talks about the experience of shopping there. And as a woman of color, she was used to this thing called shopping while black. And what that looks like is sort of over-aggressive customer service, right? Can I help you? No, can I help you? No, really, can I help you? Right? It's this customer service that eventually turns into microaggressions. There was no one there to do that when she was in Amazon Go and it was this very freeing experience for her. So by removing this one design element of the cashier, they actually made a less biased shopping experience for Ashley. David Dylan Thomas We're also seeing this play out in the judicial system. So as you may or may not know, the district attorney has a lot of leeway in deciding who to charge when a crime has been committed. They don't even have to charge anyone at all if they don't want to. And again, maybe not surprisingly, people of color are far more often charged than white people. So, in San Francisco, what they decided to try was, hey, maybe we don't need to know the gender or the race of the assailant, of the victim, of the perpetrator. 
Maybe we don't need that information, right. Maybe we just need to know what crime has been committed. In fact, let's not even reveal where it happens, because that could be biasing as well. So they redacted things like race and name, right, and then the location, to see what would happen, to see if they'd get less biased charging. Still a little bit early to tell, they just started this last summer. By the way, if anyone who lives in San Francisco can give me a ring about this, I'd appreciate it; I've been trying to find out if there's any news about this. But I think this is a fascinating approach. David Dylan Thomas I want to talk for a bit about cognitive fluency. Right? And this is a very important term for designers to learn because it has a big impact on how we work. So the basic concept is that if something looks like it's easy to read, I will assume that whatever it's talking about is easy to do. By the same token, if something looks like it's going to be hard to read, I will assume whatever it's talking about is also going to be hard to do. Now I love pancakes. If I could somehow virtually make pancakes for all of you, I would. This is a recipe for pancakes. And the text is kind of small and kind of clumped together. And I might glance at that and before I read a single word, I might assume, you know what, I bet pancakes are really hard to make. Now, if I see a recipe with big pictures and little small chunks of text that are easily scannable, I might think to myself, you know what, I bet pancakes are pretty easy to make. Maybe I'll make pancakes. A two minute video? Forget it. We're having pancakes. David Dylan Thomas Now take a look at these two ways of finding out how to use public transportation to get from the suburbs to downtown Philadelphia. Right. The one on the left, I take one glance at that and I assume it is impossible to use public transportation to get downtown. I'm going to drive, right? 
I look at this app that'll just let me input where I want to go and will give me, you know, the times I need. And again, I haven't actually read the text yet, but I take a glance at that and I'm like, I bet I could use public transportation. That's fine. I can totally ditch my car. Now, this is a video so we can't do this in real time. So I'll give you a moment to decide for yourself what you think the answer is. Okay, you're both wrong. Rosa Parks was born in 1913. But dollars to donuts you picked 1914. Most people do. And the reason is, if something is easier to read, we assume it's more true, right? But it gets worse. If something rhymes, we think it's more true. And this has consequences. So part of what's happening here is our minds like things that are easy to process, because things that are easier to process give us more certainty. And if there's one thing I've learned by looking at bias after bias after bias, it's that our minds love certainty, right? We hate uncertainty and we love certainty. We will throw people under the bus, we'll throw logic under the bus to get to certainty. David Dylan Thomas Things that are easy to process are more certain. Right, so let's say I ask you, what did you get for your fifth birthday? Right? Was it a truck? You probably don't remember, and you probably don't have a lot of confidence in answering me, yeah, it was a truck, or no, it wasn't. Like, I don't know if that happened. If I were to ask you, what did you have for breakfast? You'd be pretty confident in that answer. It happened pretty recently. It's very easy to process, right? Things that are easy to process just feel more true. And that's kind of what's happening here. Things that rhyme are easy to remember and are easy to process. Things that are easy to read and scan are easier to process and therefore seem more true. David Dylan Thomas Now this becomes really important when we think about things that people need to believe. 
As it happens, African Americans generally do not believe health information that comes from the government. In a survey in 2002, when asked whether the government usually tells the truth about major health issues like HIV/AIDS, only 37% of African Americans agreed with that statement. By 2016, that number had dropped to 18%. And I don't have to tell you how important it is for people to believe true health information at this moment in time. And granted, we could do a whole other webinar about how many reasons there are for African Americans to be suspicious of health information coming from the government. But at the end of the day, this is information that could save lives. So if it has to rhyme, if it has to be written in big bold fonts, right? If it has to use plain language, all those things should be on the table. David Dylan Thomas Now, as I said, I'm working on a book about this and my editor very wisely challenged me: yeah, but can you actually point to instances where, like, plain language or easy-to-scan things, you know, saved lives? And as a matter of fact, I'm glad I got that challenge, because I found some really interesting stuff. So pregnant smokers and ex-smokers who received a specially designed intervention with materials written at the third grade reading level were more likely to achieve abstinence during pregnancy and six weeks postpartum than those who received standard materials. So in other words, people were smoking while they were pregnant. And when they were given third grade reading level instruction around this, they were much more likely to stop and to stay stopped even after their children were born. Right. So this is really important stuff. 
Similarly, a plain language, pictogram-based intervention, right, easy to scan, easy to read, used as part of medication counseling, resulted in decreased medication dosing errors and improved adherence among multi-ethnic, low-socioeconomic-status caregivers whose children were treated at an urban pediatric emergency department, right? And improved adherence is a really big deal in medicine. Even if you can get the person the pills, the odds of them actually taking them and taking them the way they were supposed to are still pretty small. So this is a huge win. And again, coming from plain language, coming from pictogram-based things that are easy to process. David Dylan Thomas That's great, you may say, but what about rhyming? Can rhyming actually save lives? Okay. So there was a campaign a while ago, early 2000s I think, called Click It or Ticket, right. And this was a campaign around, hey, if you don't wear your seatbelt, you're going to get a ticket. Right? And this kind of had two elements. One was the law, right, which did in fact improve outcomes for, like, older people. Like they wore their seatbelts more often. But younger people were still kind of on the fence. So Click It or Ticket was kind of a media campaign, like a phrase, that got rolled out along with it. And what they found was national seatbelt use among young men and women ages 16 to 24 moved from 65% to 72% and from 73% to 80%, respectively. And this is important because the other thing I found out is that the NHTSA estimates that for each percentage point increase in seatbelt use, 270 lives are saved, right. So if you do the math, that's seven percentage points in each group, times 270, something like almost 4,000 lives saved by the Click It or Ticket campaign. So yes, it may seem silly, but it works. David Dylan Thomas And there are in fact federal plain language guidelines; there was a Federal Plain Language Act of, I believe, 2010. 
And it basically said, Look, if you're a federal website and you're providing services, or a federally funded website and you're providing services, you'd better be talking about those services in plain language. And there's some great guidelines here. 18F is an organization that has fantastic plain language guidelines that the government uses. Highly recommend you check them out whether you work in government or not. David Dylan Thomas Another quirk of cognitive fluency and, like, ease of processing is how we translate visual information around price. So usually, when you have something on sale, you put the sale price next to the original price and the original price is kind of slashed out somehow. But here's the thing. If you have that sale price further away from the original price horizontally, it seems like a better deal. Even if it's the exact same numbers, right? And what's weird about this is it only works if it's horizontal distance, not vertical distance. But we equate horizontal distance with an actual price distance. David Dylan Thomas Another fluency thing is name pronunciation. So there's an actual name pronunciation effect, where the easier a name is to pronounce, the better, like, that person will do. And easy to pronounce is culturally, you know, relative, right. So some cultures' names will be easier to pronounce in one culture and harder to pronounce in others. But if your name, in that culture, is easy to pronounce, it'll do better. And that even extends to things like voting behavior, which should be really scary. My favorite example of this, though, is they did a study of law firms, and they saw that the higher up you went in the ranks of a law firm, the easier the names got to pronounce. And this even extends to stocks, right? So you can have a fairly new stock and if the name of the stock is easier to pronounce, it will perform better. 
And in fact, even if just that little abbreviation, the stock ticker abbreviation, is easier to pronounce, the stock will do better. So we will put hard money down on things that are easier to process. David Dylan Thomas Let's talk a bit about primacy. And again, some of this visual ranking we do. So there's a thing called primacy, where basically the earlier something is in a list of things, if I give you a list of things to remember, you're more likely to remember the things at the beginning than in the middle. And we tend to, like, give things that are higher on the page more authority and rank. And Amazon ran into this when they started rolling out user reviews and people would basically give a lot more authority to the first review on the page. But the thing is, it was only the first review on the page because it was the most recent one. They did most recent first, right? The same way you do with blogs. So people were attaching all the significance to a thing that really didn't deserve it. So they created this notion of helpfulness, right, and you can rate each review as being helpful or not. And the most helpful reviews would bubble up to the top, and that's a little better. But the thing is, if it was a positive review, people would assume, Oh, this must be a good product. If it wasn't, they'd be like, oh, maybe this isn't such a good product. So they started putting the most helpful positive review right next to the most helpful critical review, giving them equal visual weight, which forced the user to give them equal intellectual weight, right, as arguments. David Dylan Thomas Another thing we run into a lot is called the bandwagon effect. And it's probably what you think it is. A bunch of people do something, and you think, oh, it must be a good idea because all these other people are doing it. A really good example is an experiment where you have this figure here, right? And say, Here's exhibit one. 
All right, tell me, A, B, or C: which of those lines is most like exhibit one? And if we were to do this one on one, you would take one look at that and say A, right? But if I put you in a crowd with 12 other people, and those 12 other people go first, and they all say B. When I get to you, you say B. Right? And when they ask people afterwards, they'll say, Oh, I don't know. I figured the rest of the people in the room might know something I didn't know, or maybe I misinterpreted the question, right? But here's the thing about the bandwagon effect. If even one other person in the room says A, it gives you the confidence to say A. We just need one other confidant, right? Courage is contagious. David Dylan Thomas Now the way this plays out in design a lot of times is you might be doing a retrospective, or you might be doing a client meeting where someone has to talk, you know, frankly, about what they feel about an idea or design. And if we're asking the question, hey, in this retrospective, what went well? And people just raise their hands and say what went well. If a whole bunch of people say, Oh, I thought this thing went well, but I didn't think it went well, guess what? It's gonna be the same thing. I'll say B, right? I'll just say, Oh, yeah, that went well, or I won't say anything at all, right? Um, whereas if I have line of sight to my confidants, or to the people who disagree, right, it's way easier for me to express my opinion. And so rather than have everybody raise their hands, right, I'm gonna say, Okay, what do we think went well? Everybody write your answers down on a sticky, right, and then I'll collect all those stickies and map them on a board, right, and then we can see, okay, these people thought this went well, but these people thought that didn't go well. Right, and we get more honest answers that way. This is especially true if the highest paid person in the room is giving their opinions. 
Like, everybody's gonna agree with them if they're the ones to raise their hand, but if everybody's writing it down, it's a lot easier to get honest opinions. David Dylan Thomas Now, the most dangerous bias in the world, in my opinion, is the framing effect. And it starts out harmless enough. So let's say you go to a store and you see a sign that says beef 95% lean, right. And next to that there's a sign and it says beef 5% fat. Now, which beef are you going to get, right? They're the same thing, but most people gravitate toward the 95% lean because it's been framed in a more positive way. And that's harmless enough when we're talking about beef. But what if I were to say, should we go to war in April or should we go to war in May? See what I did there? We're no longer talking about whether or not we should be going to war at all. And wars have been started over less. Now, if you are bilingual, or if you speak more than one language, you actually have a secret weapon against the framing effect. If you think about the decision in your non-native language, you're much less likely to fall for the trap. So I speak a little bit of French. So if I were to try to think about the beef decision in French, it might go something like I see beef, that's boeuf, that's a lot of vowels. 95% (...) maybe, right? And I'm going through all that in my head, and it slows down my thinking, and I can see right through the trap. And that is what a lot of this has to do with, is slowing down your thinking. The seminal text on cognitive biases is Thinking, Fast and Slow by Daniel Kahneman. And if you read that, you don't need any of this. David Dylan Thomas But one of the things he keeps coming back to is this notion of you have a system of thinking that just jumps to conclusions very, very quickly. And that's the one that can fall into error. 
We have another way of thinking that's much more methodical and slow, and when you're using that method of thinking, you're much less likely to fall for these traps. So thinking in another language slows down your thinking, so you're less likely to make an error. Now it is possible to use the framing effect for good. So there's an experiment where I can show an audience this photo and say, should this person drive this car? And what we'll get is basically a policy discussion, right? And some people will say, oh, old people are bad at everything, don't let them drive. And you'll have other people who say, What are you talking about? That's ageist. Of course, let people do what they want, right? And all you'll learn by the end of that conversation is who's on what side. Now you can show the same photo to another audience and say, how might this person drive this car? And what you'll get is a design discussion, right? And people will say, oh, what if we change the shape of the steering wheel? Or what if we move the dashboard? And what you'll learn by the end of that conversation is several ways that person might be able to drive that car. I only changed two words, right? But that frame change gives you a completely different conversation. In fact, if I were to say how might we do a better job of moving people around? Because that's why the person was in the car in the first place, because they were here and they wanted to be there. Well, if I frame it this way, things like public transportation are suddenly on the table. Now we can see the frame play out in things like evaluations of teachers, another area that is highly gendered, highly biased against women and people of color. And so, at the University of Iowa I believe this was, they tried an experiment where they gave students a couple of short paragraphs before the survey they would fill out to evaluate their teacher that basically said just that. 
Hey, this is a field where these evaluations tend to be biased against women. So when you're making your evaluation, please just focus on the quality of the teaching, blah, blah, blah, blah, blah. David Dylan Thomas The people who got that text before they filled out the survey rated the female teachers and the teachers of color higher, and rated their courses higher. Now, I want to close by talking about our own cognitive biases, because these are the ones that are actually the most dangerous for our users. First I want to talk about notational bias. And this is a bias where you create things in the image of whatever's in your head. So this is sheet music. I had to learn sheet music. When I was a kid, I learned how to play saxophone. And I assumed that all music in the world could be expressed this way, right? As it turns out, not so much. There's all sorts of Asian music and African music that this is totally inadequate for. But if you make this the standard, if you make this the default, you're throwing away so much culture. The same thing happens when we're creating forms, right, to collect personal information. If I have it in my head that there are only two genders, male and female, I'm going to create my forms that way. And if I create my forms that way, I'm erasing any number of identities, right? David Dylan Thomas By the way, while we're on the topic of collecting personal information, there's a thing called self-serving bias. And the way it works is: if something works, it's to my credit. If something fails, it's your fault. And usually, all things being equal, that's how we feel about computers, right? If I'm trying to make an online purchase or something, and something goes wrong, I'll blame the computer, and if something goes right, I'll credit myself. Unless, right? Unless I give the computer a lot of personal information.
The more personal information I give the computer, the more likely I am to blame myself if something goes wrong and credit the computer if something goes right. And if you think about it, this kind of makes sense, right? Because we have very personal relationships with our computers. Who else knows personal information about us? Our friends. We're starting to treat the computer like our friends. So we need to be very careful about when, how, and in what context we're asking for personal information. We need to be very sure we need it in the first place. And we need to be very careful about how much we ask for, because every time we do, we are potentially engendering an unhealthy relationship between people and computers. So just keep that in mind. David Dylan Thomas Another way this manifests itself, and it always breaks my heart to read this, but until 1986, The New York Times prohibited the use of Ms. as an honorific for women. Right. And the way that worked was: in the first mention of a woman's name, you would have her full name, and every subsequent mention would say Mrs. Last Name or Miss Last Name, right? And the pattern this sets up, and remember how important patterns are, the pattern this sets up is that the most important thing to know about a woman is whether or not she's married. And then, you know, maybe her name. Now, think about that. Year after year after year, article after article, it sets up this terrible pattern. And if you structure your content in that way, you're engendering a stereotype. So we need to be very careful about how we structure our content, and to make sure that notational bias doesn't end up scaling some terrible bias to any part of society. David Dylan Thomas The reason we need to be careful, right, is that language doesn't just describe reality. It shapes it. Right. So this is Mbiyimoh Ghohomu, who works for IBM. He said this once and it really stuck with me.
And it's very true, right, not just metaphorically, but really legally. So when Dick Cheney became Vice President, he kind of wanted to, you know, do some shenanigans, and his lawyers told him: okay, look, you are Vice President, which means obviously you are a member of the Executive Branch, right? But you also cast the tie-breaking votes in the Senate, which means you're also part of the Legislative Branch, right? But you can't be both a part of the Executive Branch and the Legislative Branch. So maybe you're not a part of either, right? And if you're not a part of either, well, then you don't have to follow the rules of either. And he didn't. The same thing is happening today with, let's say, Facebook, right? When it suits them, Facebook says, hey, we're a publisher (...) the New York Times, we know what we're doing. But when someone points out that there are rules for publishers? They say, whoa, whoa, whoa, I don't know what you heard, but we're not a publisher. We're a platform. Right? So they play that same game, because words matter. David Dylan Thomas Now, there are tools out there to help you think about representation, how to use your words more carefully. One of them is Radical Copyeditor, and they're very good at giving you tips about how to write about people who don't usually get a lot of say in how they're written about, to do it more respectfully, with more agency. Another good tool for this is Textio, and they do a lot of things, but in particular, they help you write more inclusive job descriptions, because not everybody wants to be a rock star or a ninja. So those are some tools to look at. David Dylan Thomas Another thing I really think we should think about more is evidentiality, right? And this is something that other languages have but English doesn't. Turkish, for example, has this. And it's this idea that in English, we don't use our verb tenses very heavily, right? They don't have to do a lot of heavy lifting.
They really just have to tell you when something happens. So: Bob went to the store. Bob is going to the store. Bob will go to the store, right? Other languages make their verb tenses do a little more heavy lifting. And the verb tense will tell you not just what happened, but how I know that it happened. Right? So there's one tense for, I saw Bob go to the store myself. There's another tense for, somebody told me Bob went to the store. And there's another tense for, I read it on the internet, right? But at the end of the day, I can't tell you that Bob went to the store without also telling you how I know. Now, think about that: in this day and age, we want people to be accountable for their sources. But we don't have this in English. But I think there are ways we could introduce it. So remember we were talking about how if something's easier to read, it seems more believable? By the same token, if something's harder to read, it seems less believable. So this is an actual article, right, about the new 007 movie and the trailer that dropped for it. And the first paragraph is verifiable facts, right? This is the name of the movie, this is who's in it. The second paragraph is literally a rumor about why the first director got fired. And it's harder to read. And it should be harder to read, and be less believable, because it is less believable, right? It's a rumor; it shouldn't be as easy to believe. Now, normally in this article, both paragraphs are written at the same level of readability. But what if they weren't? Right? So this is something I think we can start to play with to introduce evidentiality into the English language. David Dylan Thomas Told you we'd come back to this one. So I want to tell you a little bit about how I completely misinterpreted the scientific method. So I thought that the way it works is: I have an idea about how the world works. And I test that.
And if I get a good result, I ask you to test it, and you, and you, and we all get the same results. Great. New theory, new law, let's move on. The way it actually works, after talking to some real scientists, is more like this: I have an idea about how the world works. And I try something out. And if I'm right, you try it, you try it, you try it, and you're all right. Great. Now I get to spend the rest of forever trying to prove myself wrong. I have to ask myself, if I'm wrong, what else might be true, and then go try to prove that, which is much more rigorous. And that's why the scientific method was invented: to fight confirmation bias. And as designers, it can be really easy for us to leave good design on the floor because we had an idea that we thought was better. David Dylan Thomas So I can show you how easy it is for that to happen. So let's say we're going to play a game, a computer game. The computer says 2, 4, 6, ? Put whatever number you want where that question mark is, and the computer will tell you whether or not it fits the pattern. Put as many numbers in as you like. And when you're ready, guess the pattern, right? If you're like me, you guess 8. And the computer says, congratulations, that fits the pattern. Would you like to try another number? And if you're like me, you say, nope, I got this, hold my beer, the pattern is even numbers. And the computer says, no, it's not. And it says no because I never tried a number that could prove me wrong. The pattern is not even numbers. The pattern is: every number is higher than the number that came before it. Which is a much more elegant pattern, a much more elegant solution. Much easier to code, right? But I never found that solution because I was so in love with my even numbers idea. Right? And as designers, we do this all the time. If we come up with a design that we like, or God forbid that our client likes, right, we're stuck with it. Because it's easy for us to miss these better solutions.
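The 2, 4, 6 game described above (the classic rule-discovery task) really is easy to code. Here's a minimal sketch; the function name and the sample tests are just illustrative:

```python
def fits_pattern(numbers):
    """The computer's hidden rule: each number is strictly
    higher than the one that came before it."""
    return all(a < b for a, b in zip(numbers, numbers[1:]))

# Confirming tests only ever agree with the "even numbers" hunch:
print(fits_pattern([2, 4, 6, 8]))   # True
# A disconfirming test ("if I'm wrong, what else might be true?")
# exposes the hunch: 7 is odd, yet it fits.
print(fits_pattern([2, 4, 6, 7]))   # True
# And an even number can fail, because the real rule is about order:
print(fits_pattern([2, 4, 6, 4]))   # False
```

The point of the exercise survives in code: only a test designed to fail the "even numbers" theory ever reveals the simpler rule.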
David Dylan Thomas Now, there are methods that can help us fight that bias. One of them is called Red Team, Blue Team. And it's a tool used by both the military and journalists; think about how often that happens. But the basic idea is you have a blue team. And they're kind of like the first team: they go in, they do the research, they start to come up with the designs, maybe get as far as wireframes. They get the design into a shape where it can be expressed. And then the red team comes in for one day. And the red team's job is to go to war with the blue team, right? They're there to find every missed opportunity, every little thing that could actually cause harm, that the blue team never found because they were so in love with their first idea. Now, what I like about this approach is that it's cost-effective, right? I don't have to tell my COO, hey, going forward, we need to spin up two teams for every single project and they've got to check each other's work every day. No, I need one team for one day to try to keep something harmful from being put out into the world. David Dylan Thomas Another great tool for this is speculative design. And it's basically thinking about the worst-case scenario in a story kind of way. So, if you've ever seen Black Mirror, it's basically the Twilight Zone, but for tech. So you have some near-future tech, and a short story, you know, an episode, about what would happen if real human beings got their hands on that tech. And it's usually terrible. I feel that anybody working on a new technology should, by law, have to write a Black Mirror episode about it. But this is a real job, right? So there's a group called Superflux that went out to the United Arab Emirates. And they were asked to help them think through energy policy, right? Should we continue with fossil fuels? Should we try renewables? And one of the things Superflux did was they said, okay, let's think about what your air quality is going to be like in the future, right?
What will it be like in 2028, in 2034? But they didn't just figure it out. They actually bottled it. And then they made them breathe it, right? And by the time we get to 2028, it's unbreathable, right? And, causation or correlation, by the end of that summit the UAE announced, we are going to put, I think it's 150 billion, with a B, dollars into renewables. David Dylan Thomas The last bias I want to talk about is called déformation professionnelle. I told you I spoke French. And it's basically the idea that you view the world through the lens of your job, which in the workaholic society we live in may seem like a good idea. Until it's not. So the paparazzi who ran Princess Di off the road probably thought they were doing a great job. And technically speaking, they were, right? They were getting really difficult-to-get photographs that were going to fetch them a really high price. But what they were doing a bad job of was being human beings. Now, when the former police commissioner of Philadelphia got the job, he asked his officers, what do you think your job is? And many of them would say, oh, to enforce the law, right? And he would say, that's a reasonable enough answer. But what if I were to tell you your job is to protect civil rights? Now, that encompasses enforcing the law, but it's a bigger mandate. And it means that you have to give people some dignity. In fact, it gives you a mandate, it gives you permission, to treat people with dignity. And at the end of the day, it means that their jobs were harder than they thought. And I would submit that our jobs are harder than we think. Right? It's not just design cool shit, right? No. We need to hold ourselves to a higher standard. We need to figure out, you know, how do we define our jobs in a way that lets us be more human to each other. David Dylan Thomas Now, there are folks working on this already. Mike Monteiro over at Mule Design has created this little booklet of design ethics.
He has a great book called Ruined by Design that gets into this. And it's basically a Hippocratic Oath, first do no harm, for design. And there's the Markkula Center for Applied Ethics, which has fantastic ethical frameworks to help you think through decisions in your design, right? And you can take all these different approaches: which option will produce the most good and do the least harm? Or which option treats people equally and proportionally, right? These are things you can think about in your design. There's the Tarot Cards of Tech, which is fantastic. It's a website you can go to. And they basically have these little card interactions, and you can read really instructive questions like, how might cultural habits change how your product is used? And how might your product change cultural habits? Now, think what would have happened if Twitter had asked these questions before they launched. We'd have a very different world today. Another really good resource around this is 52 UX Cards to Discover Cognitive Biases. And what I like about this one is, if you go to the website, they not only have the cards you can print out, but they have some great exercises you can run through to use them. A great, almost a design system for ethical design, is called Humane by Design. And it's a gorgeous website with these really useful, practical tips for approaching more ethical, bias-informed design. David Dylan Thomas We're seeing this play out in software, too. So there's the Never Again Pledge. Around the time when data engineers were being asked to do some really unethical stuff, they got together and they created this pledge, right? And basically said, here are the things we pledge not to do around data. And you're starting to see people get together and organize and do collective action around these things, just short of actually forming unions, which is something we should definitely discuss.
But back when Google tried to create Project Maven, which was a battlefield AI, you had a whole bunch of Google engineers who got together and said, look, we didn't get into this business to make weapons. So we're gonna walk, right? We're not going to do this. And Google cracked. They walked away from a 250-million-dollar contract. Then they turned right around and did Dragonfly, which was about censored search in China, which, you know, has still not been resolved. But a similar battle is playing out there, where the engineers are saying, hey, we're going to walk if you keep doing this. By the way, who comes up with these names, these James Bond villain names, Project Maven and Dragonfly? Like, some adult human actually put that in an email. David Dylan Thomas We must rapidly begin to shift from a thing-oriented society to a person-oriented society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered. Now, this is not some software guru at a TED talk, right? This is Martin Luther King, Jr. He saw this 50 years ago. And it's only more true today. So the challenge I would give all of us is to decide on a way to define our jobs that lets us be more human to each other. Thank you. David Dylan Thomas If you ever want to catch up after this talk, you can find me at that link to have a virtual coffee. And like I said, I've got a book coming out called Design for Cognitive Bias this summer from A Book Apart. And you can get on that mailing list at that link there. Again, thank you so much for your time, and enjoy the rest of the conference. Transcribed by https://otter.ai