Ashley Brewer Hi, everybody. Thanks for coming to my talk, Spelling UX D-E-I: Strategies and Ideas for Centering Your UX Practice around Diversity, Equity, and Inclusion. I wish I could be there with you all in person, but I really appreciate all the work the organizers of this conference have done to move it to a digital format, and I'm looking forward to having a conversation with you on Slack a little bit later.

Ashley Brewer So my name is Ashley Brewer. I am the Web Systems Librarian at Virginia Commonwealth University Libraries in Richmond, Virginia. My pronouns are she/her/hers. And I'm here to talk about how we can make our designs, IA, and UX work speak to a diverse range of lived experiences, promote equitable access and justice, and make folks feel included and welcomed. This is a short 20-minute talk, so it is not at all meant to be exhaustive or even comprehensive. Also, as a white cisgender woman, I do not intend to paint myself as any kind of authority by giving this talk. My hope is to spark conversation, and maybe inspire some different ways of thinking about our work as a force for good, by bringing up a few ideas and case studies I've found helpful or illustrative. And because I think we all need to be implicated in the power of our work as technologists, and in using that power to make folks' lives better, or at the very least to take a sort of technologist's Hippocratic oath, though I think we can do better than merely doing no harm. Maybe, you know, even dismantle some white supremacy and patriarchy along the way. This talk is going to assume that you care about those things, and that you care about your work outside of capitalism and profit-driven motives. So, all that said, let's talk about it.

Ashley Brewer So first, I want to start by talking about why I think being conscious of diversity, equity, and inclusion in our work as designers and UXers is not merely important but necessary. And I feel like that starts with talking about the myth of neutrality. So I don't know how many people need to hear this today, but technology is not neutral. There's nothing neutral about technology. I think this quote from Safiya Noble sums it up quite well: "Many people think that technology is neutral, that it's objective, that it's just a tool and has no values. That's the dominant narrative around technology and engineering in particular. But computer language is, in fact, a language. And as we know in the humanities, language is subjective; language can be interpreted in a myriad of ways."

Ashley Brewer So like I said, I think that sums it up really well, but to add just a couple of my own thoughts: it should perhaps seem obvious that things made by humans can't help but inherit some of the errors and imperfections of their creators. And yet many folks talk about technology, be it a search engine, as Noble explores in Algorithms of Oppression, or the data applications used in financial services, housing, and criminal justice, as Cathy O'Neil talks about in Weapons of Math Destruction, as though there's some kind of alchemy at work, purifying away our own biases and assumptions through the process of translating an idea, through code, into a piece of technology. But of course, like alchemy, this isn't a real thing. We can't code away our own lived experiences. The very code itself was made by humans with their own particular worldviews.
Ashley Brewer Coming from the world of libraries, I encountered this neutrality myth not just in my world as a UX designer working with technology. Libraries also have a self-perpetuating myth about themselves as neutral spaces and neutral defenders of free speech, which has not only resulted in fighting against book banning, which is rightfully celebrated, but has also resulted in libraries using their spaces, or allowing their spaces to be used, to amplify speakers with hateful agendas like TERFs and white supremacists. So in addition to questioning the very possibility of neutral tech, I'm also questioning whether neutrality is even good or desirable. To quote a fictional Alexander Hamilton: if you stand for nothing, what will you fall for? So, okay, sometimes maybe we make some erroneous assumptions about the uses of our systems and designs, or they're based on some flawed data. Big deal, right? Well, if it were merely annoying, I would still challenge us all to do better. But in many cases, the biases, or in some cases the explicit needs of commercial business, can cause very real harm. I don't have time in this talk to go in depth into each of these cases, but if you're not familiar with the cases above, I would encourage you to do some research of your own into them. These are just tip-of-the-iceberg examples; there are many, many more. But Algorithms of Oppression, as I've previously noted, explores algorithmic bias, particularly in Google, inspired by searches between 2009 and 2015 with racist and sexist results, such as sexually explicit content on the front page of a search for "black girls" and images of almost exclusively white men under a search for "professor style." Those are just two examples. There are also case studies of widely reported issues of racial bias in risk assessment software used by judges to determine sentencing, and criticisms of value-added modeling used in the assessment of public school teachers, which was in fact ruled unfair in a federal lawsuit brought by teachers in Houston but had previously been used to determine bonuses and evaluate teacher effectiveness. And of course, there's so much more.

Ashley Brewer Okay, so those are a couple of examples of the consequences of systemic bias in our designs and the real harm it can cause. So how do we limit baking bias into our systems and designs in the first place? Well, one potentially mitigating factor is doing everything you can to make sure you've got a variety of lived experiences in the decision-making room. If everyone on the design team looks like you or has a similar background, what can you do to advocate for greater diversity on the team? Is everyone allowed to speak? Does everyone feel empowered to speak? If you have no hiring authority, can you advocate for diversity as a value in your organization, and for those values to be codified, with meaningful steps taken to recruit a variety of folks? Will you commit to speaking up for underrepresented folks and supporting them? Will you commit to shutting up and listening to them? And in addition to who's in the design and decision-making room, who's in the room when you're doing user testing and UX research? Are you going to the same places to recruit folks? Are you recruiting the same kinds of people because of who you believe your audience is? What would happen if you challenged the narrative that only certain types of people with certain experiences can contribute to your design or product, both as users and as designers?
So I think we should all be diversity champions. We should all work to make sure our working environments are welcoming and inclusive toward diverse perspectives. We should help amplify the underrepresented voices on our teams and in our workplaces, listen to what they have to say, and hold ourselves accountable to do that work as well. We need to listen to our underrepresented colleagues, but we also can't expect them to do all of the lifting of making our workplaces and products more inclusive and just. It's important not to take our privileges for granted.

Ashley Brewer So another item I want to critically interrogate, going back to the harm reduction point, is what data we collect from folks and how we ask for it. There are plenty of legitimate reasons to ask for demographic data, but there are also less legitimate ones. I think we collect and ask for information without really thinking about it a fair amount of the time, because it seems standard. But once again, the data we collect from folks or ask them for, how we have them interact with our forms, and how we present information about them back to them can have very real consequences. A great, or rather bad, example of this can be seen in deadnaming. Deadnaming, if you're not familiar, is when you use, or your system or app insists on using, the quote-unquote legal name or parent-given name of a person rather than their name of use. This can have a particularly deleterious effect on trans folks, both in terms of making them feel othered and not welcome in your system when they're confronted with their deadname upon logging into your app or system. But on the flip side, it can also contribute to outing folks if you allow them to change their names in your system without giving them full control over who else sees their information. University identity management systems are particularly difficult and fraught for trans folks, especially around name changes and names of use, with the fear or possibility that something with a name of use might get sent to parents, for example. If they aren't out to their parents yet, that could be potentially dangerous for a lot of folks. Additionally, binaries and other categories persist in our systems and design hierarchies. Again, my background in libraries helps me understand this: bias and prejudice have long existed in the Library of Congress subject headings, and were famously called out way back in 1971 by Sanford Berman in Prejudices and Antipathies: A Tract on the LC Subject Heads Concerning People. So classification, I get it, is integral to our sensemaking and certainly to creating navigable systems. But what assumptions are we making to create our taxonomies? And are they harmful?

Ashley Brewer So a company that I really like, that I think is doing some really cool stuff and is a great example of how you can exist without the gender binary, as one example of a binary, is Big Bud Press. They don't distinguish, as you can see in their menu up top here, between men's and women's clothing categories, but instead focus on comprehensive measurements and representation. As you can see throughout their site, they show different types of models, colors, shapes, and sizes. And best of all, in addition to a traditional size chart with garment measurements, they have a model size chart that shows a variety of femme and masculine bodies of different shapes and colors modeling their clothes, with each model's measurements and the sizes they take in certain popular items.
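Going back to the deadnaming and name-of-use point for a moment, one way to picture what that kind of control could look like is a rough sketch like the one below. This is a hypothetical data model, not any particular identity system's schema; field names like nameOfUse and legalNameVisibleTo are invented purely for illustration. The idea is simply that the interface only ever shows the name of use, and the person decides who, if anyone, ever sees a legal name.

```typescript
// Hypothetical sketch only: these types and field names are invented for
// illustration, not taken from any real identity management system.

type Audience = "self" | "named-staff" | "everyone";

interface NameRecord {
  nameOfUse: string;             // the name the person actually goes by
  legalName?: string;            // stored only if a process truly requires it
  legalNameVisibleTo: Audience;  // the person decides who ever sees it
}

interface Profile {
  id: string;
  name: NameRecord;
  pronouns?: string;             // optional free text, not a dropdown binary
}

// Every screen, email, and notification uses the name of use; the legal name
// is never a display fallback, so a deadname never leaks into the interface.
function displayName(profile: Profile): string {
  return profile.name.nameOfUse;
}

// The legal name is only released to viewers the person has allowed.
function legalNameFor(profile: Profile, viewer: Audience): string | undefined {
  const setting = profile.name.legalNameVisibleTo;
  const allowed =
    viewer === "self" ||
    setting === "everyone" ||
    (setting === "named-staff" && viewer === "named-staff");
  return allowed ? profile.name.legalName : undefined;
}

// Example: a student who restricts their legal name to themselves.
const student: Profile = {
  id: "v00123",
  name: {
    nameOfUse: "Sam Rivera",
    legalName: "Samuel Rivera",
    legalNameVisibleTo: "self",
  },
  pronouns: "they/them",
};

console.log(displayName(student));               // "Sam Rivera"
console.log(legalNameFor(student, "everyone"));  // undefined
```

The details would obviously differ in a real identity management system; the point is just that the name of use is the default everywhere, and visibility of the legal name is opt-in rather than the other way around.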
And, you know, beyond the great representation that Big Bud Press shows, it's obviously way more useful to have a sizing chart that's more than just numbers and gives you some context by showing it on a real body. So really great, really great work there, I think. So what can we do? Again, this is not at all exhaustive, and it's also heavily borrowed from the excellent A List Apart article "Trans-inclusive Design" by Erin White.

Ashley Brewer A few examples of how we can build affirming systems and designs: embrace the singular they; avoid honorifics in forms, or if you must include them, embrace Mx as an option; give users control over what information they offer and how it's presented; and question your categories. What categories, what differentiations, are actually necessary for what you're trying to accomplish or sell, or for what you want users to be able to do in your system or app? And in images, make sure you're showing a variety of bodies. This Vice collection of trans and gender nonconforming stock photos is a great resource.

Ashley Brewer So some things are best practices, and some things are legally mandated: certain types of non-discrimination, and, in the case of web development, making sure interfaces and products can be used by folks with a variety of abilities. What I'm arguing for in this talk is not how to meet minimum thresholds of lip service to diversity or compliance with the latest accessibility standards. I'm arguing for us to strive to make systems and designs that actively affirm and welcome folks. Take accessibility as an example. Accessibility is, of course, making your site or interface usable by folks with different abilities, making sure it checks off the latest accessibility standards. But a more inclusive approach would be making differently abled folks feel seen by your design or product, or optimizing their experience somehow so that it's more usable. For example, putting as many thoughtful, imaginative, and useful elements into your design for someone who is, say, visually impaired as you do for someone who is sighted. Add the same kind of whimsy, or something extra, for someone who is visually impaired as you would for someone who isn't.

Ashley Brewer So this has already been kind of a lot. It's a big topic, again, and I'm not even trying to be anywhere near exhaustive. But, you know, what would I say is a guiding principle for all of this? I don't pretend that what I've come up with is the best way to phrase it, and I look forward to discussing other ways of framing this or building guiding principles around this sort of work. But what I've come up with, and what has helped me, is the idea of radical empathetic critical human recalibration. So what we want to try to do is limit technology's ability to perpetuate biases and injustice. This can be done by working to limit it from the get-go, as I've talked about previously in this talk, but we're not going to be perfect. So how do we continue to hold ourselves accountable and give ourselves opportunities to improve before we've done harm, or to mitigate any harm we have done as quickly as possible? I think one way to do this is by challenging the stories that algorithms and market research encourage us to tell about ourselves and about each other. Our work as designers and technologists, making things, is inherently based on a mix of research, hopefully, and assumptions: assumptions based directly on the research, and often inferences built on our previous assumptions.
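To make that "same kind of whimsy" point a little more concrete, here's one small, hypothetical sketch: a celebration moment that gets announced to screen reader users with the same care it gets animated for sighted users, instead of being hidden from them entirely. The markup and the message are invented for illustration, not taken from any particular product.

```typescript
// Hypothetical example: a confetti moment that is also announced politely
// to screen readers, rather than existing only as a visual flourish.

function celebrateSave(container: HTMLElement): void {
  // The visual treat for sighted users.
  const confetti = document.createElement("div");
  confetti.className = "confetti-animation";   // assumed CSS animation

  // The equivalent treat for screen reader users: role="status" makes this
  // a polite live region, so the message is read aloud without stealing focus.
  const announcement = document.createElement("p");
  announcement.setAttribute("role", "status");
  announcement.textContent = "Nice work! Your reading list is saved.";

  container.append(confetti, announcement);
}

celebrateSave(document.body);
```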
Coming back to those assumptions: we have to be careful how much we build on each new assumption, and how many layers of assumption we allow to build on previous assumptions before we test and recalibrate.

Ashley Brewer This is where letting algorithms and AI go wild and run everything can get so thorny. If we only let something learn from itself, whatever we put into it only gets magnified. This is the case for us too, which is why it's important to read widely, experience widely, and expand our range of experiences and who we're interacting with. But it's definitely the case with something like an AI. Whatever we build, we need to consistently assess, re-examine, and test, but also critically interrogate. We have to re-ask our whys and hows, and maybe even find new questions to ask.

Ashley Brewer And I think that our work in IA and UX is uniquely positioned, that we're uniquely positioned, to help advance the work of equity, justice, and belonging in design, since so much of what we do is about asking questions, cultivating empathy with our users, understanding context, and trying to see where they're coming from. If we approach this work sincerely, with minimal noise from outside expectations, then we're already on a good trajectory. But think about some of the skills we use in user interviews, for example: asking questions to get more information from our users that we can act on, like "What makes you say that?" "Can you tell me more about that?" "Let's explore that." Imagine turning these questions inward, on ourselves, on our colleagues, and on our organizations, to uncover and interrupt bias.

Ashley Brewer So that's it for my talk. Thank you so much for listening. As I said, I wish I could be there with you all in person, but I'm really looking forward to discussing some of this, and hearing your own experiences and thoughts around centering diversity, equity, inclusion, and belonging in our design work, on Slack a little bit later. And I hope you and yours are staying safe and healthy. And here's a quick shout-out to some of the texts and presentations that I explicitly cited and used in this talk. Alright, thanks so much.

Transcribed by https://otter.ai