2014 Main Conference Talk
This talk explores the evolving relationship of content, taxonomy, wayfinding, and context in order to propose a practical strategy for designing information systems across digital touchpoints. Users increasingly expect multi-device and multi-session consistency when they engage with a digital product. At the same time, delivering a consistent experience grows increasingly complex as services and touchpoints diversify and add capabilities.
In this talk, I will examine the responsive and adaptive approaches used in front end development and map the core insights of responsiveness and adaptability to key tasks in information architecture. I will show how responsive information architectures offer a solution to the challenges presented by contextually complicated information spaces by creating robust content driven systems designed to be articulated across contexts.
By situating these principles in real world examples, I will provide attendees with both the practical insight to apply these strategies to their own projects and the vocabulary and understanding needed to get buy-in from teams and stakeholders.
About the speaker(s)
Andy Fitzgerald is an independent digital experience designer with applied expertise in design research, information architecture, interaction design, and prototyping. He works with organizations of all sizes to create elegant solutions to complex information problems.
Prior to forming his own practice, Andy held design and director positions with Frog Design and Deloitte Digital where he tackled the problem of effective communication in complex information spaces for a wide range of client organizations in healthcare, education, financial services, retail, entertainment, and transportation. Andy is an active member of the IA and experience design communities and has spoken and led workshops at UX and IA Conferences all over the world.
Andy Fitzgerald: Thank you everybody for coming. It’s great to see you all.
I want to start here. I want to start with the page, and the practice of writing that comes with the page.
Together, these two revolutionary technologies have allowed us, as a species, to accomplish amazing things and terrible things, but also amazing things.
Some have argued, successfully in my book, that these technologies combined have changed the way our brains work. They’ve changed the way we think, and the potential of technologies to deeply affect how we exist in the world is exciting stuff.
Like other technologies, however, the page comes with both opportunities and constraints. Some of the constraints of the page are fairly obvious: it’s fixed. Once it’s printed, you can’t do much with it.
Not a lot of diversity in pages. We get color. We get size, but they’re still pretty page-like. We interact with all of them in more or less the same way. They don’t often interact back much. They become airplanes or wads. Now, for pages, this is fine.
It becomes a problem when we map these constraints onto other kinds of technologies. There’s been lots of talk already today at the summit about mobile and what that means for IA.
Now, we know that these devices are coming onto the market in greater numbers and variety every day, and we know that they consistently trump the page-like assumptions and constraints that we bring to them.
We also know that mobile is only one piece of the information ecology we design for at present. Scott Jenson uses a metaphor to describe this larger device landscape: “This ecology is composed of bears, bats, and bees.” The bears are the devices that we have on our desks and in our pockets that have rich displays. They’re powerful, they’re versatile, they have lots of capability.
The bats are the smart objects that focus on a single task, have more limited display capabilities. The bees, finally, are the inexpensive, ubiquitous sensors that send data back from the wider network.
All of these create information flows that need to be designed to be intelligible for our end users. Now, I like this image of this futuristic, bucolic technosphere, but when I think about bears, bats, and bees in the connected environment, or bears, bats, and bees in the connected home, in my living room?
I usually think of something closer to this.
Andy: The situation is that it’s not so much an Internet of Things as a zoological horror show. It’s one that, at least in part, it’s up to us as IAs to deal with: how to tame this horde of information.
The response from the design community on how to avoid this domestic catastrophe has been pretty clear.
Sara Wachter-Boettcher puts it like this: “The best we can all do is focus our limited stock of human care and attention toward designing systems, not obsessing over individual pages and platforms.”
This question of designing systems has taken a lot of forms. When we were designing for the desktop, we pretty quickly figured out, especially with mobile, that that was bad.
Then we started doing mobile first. Then we started figuring out how fluid and slippery content was, and we started moving into content first, which is where we still are now, in that “content first” place.
What this looks like is still taking shape. For my purposes today, I’d like to explore what content first can look like from a practical IA perspective.
As some of you know, I’ve written recently on how we deal with this kind of ecosystem. I’ve written at a very conceptual level, thinking in the big broad terms we like to think of here.
My presentation today will deal with some of those broad level things, but also deal with some very concrete level things. The concrete level things I would like to submit to you not as solutions, but as thinking exercises.
Places where we can poke holes in our assumptions and, hopefully, ways that we can start to reframe the questions that we bring to this space. It’s with this goal in mind that I’d like to present you with a reading of one way we might design IAs for systems: a responsive information architecture.
In a nutshell, I define this as an information design strategy that allows for the expression of specific meaning across multiple and independent contexts. The way I’ve been practicing this recently in the agency that I work for is made up of four key pieces.
The first is a rich understanding of the information ecology. The second is a content-driven set of guidelines for interaction design choices. The third is articulated information structures based on multiple modes of meaning-making.
Articulation is a word that I’m going to be using a lot today. By it, I mean, expression of our ideas, fluently and coherently.
Then, finally, the last piece here is an embrace of ambiguity as a strategy for negotiating the connected environment. I’m going to go through these four ideas in four basic steps.
I first want to look at responsive web design. My interest there is to pull out the kinds of insights that responsive web design, which has hit this space head-on, has brought to bear, and to see how we can apply them to IA.
I’ll also look at meaning-making, and meaning-making as an embodied ecological phenomenon. We’ll look at information architecture, as we’ve practiced it up until now.
Then, I’ll give an example of what a responsive information architecture, as an emerging practice for negotiating cross-channel experiences in the connected environment, might look like. Let’s start here, with responsive web design.
Ethan Marcotte, who literally wrote the book, “Responsive Web Design,” in 2011, writes that, “The goal of responsive web design is to embrace the flexibility inherent to the web, without surrendering the control we require, as designers.
“Like the constraints on information imposed by the page, many of the constraints from the web are constraints that we’ve imposed ourselves.” Table layouts, spacer GIFs, Flash, m-dot sites. None of this is inherent in the web.
One of Marcotte’s key recommendations for responsive web design is the responsive grid. The responsive grid is pretty easy, in general. You break your content into chunks, and then assign each a certain width on the page within rows.
As your screen size changes, you can reorganize those widths and then stack them vertically. I’m breezing over this because I’m assuming that 99 percent of you are like, “Yeah, Andy, we got it.”
Every time you open a responsive web site on a phone, you’ll see this happening. We can take skinnyties.com, for example. We have our large bar there, across the top, and then our three content chunks.
We look at that on a phone, and we can see that everything’s taking up the full width of the page. What’s interesting is when we start to move up the levels of abstraction, of how responsive web design works and the kind of models that are behind it.
If we look at the underlying structure of that responsive grid, we see that it’s made up of 12 columns. The code in the website tells that first chunk, “Hey, on a desktop, you’re going to be 12 columns wide.” It tells the second chunk, “You’re going to be six columns wide.”
The third and fourth each get three columns. On mobile view, the code says, “Hey, we’re on a smaller screen. Everybody’s 12 columns wide.” So they stack.
When we abstract this up another level, we can see that in designing the grid, the developer has to conceptually map out all the possible columns, and all the possible column combinations.
You would never see this many columns on an actual website, but this is the underlying structure that makes the gridded display we saw earlier, possible.
If we abstract up one more level, we can see that what we view in the end as an orderly, tidy grid of boxes is an exhaustive set of instructions built into the website’s CSS. It’s these declarations that create the possibility of a structure that we can easily recognize.
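As a thinking exercise, the grid logic described above can be sketched in a few lines of code. This is a minimal illustration of the model, not skinnyties.com’s actual CSS; the breakpoint value and the per-chunk column spans are assumptions.

```python
# Minimal sketch of a 12-column responsive grid: each content chunk is
# assigned a column span per breakpoint, and small screens force every
# chunk to full width so the chunks stack. All values are illustrative.

MOBILE_BREAKPOINT = 768  # px; an assumed, commonly used cutoff

# Desktop spans out of 12 columns for each hypothetical content chunk.
DESKTOP_SPANS = {"header": 12, "feature": 6, "promo_a": 3, "promo_b": 3}

def column_spans(viewport_width):
    """Return each chunk's column span at a given viewport width."""
    if viewport_width < MOBILE_BREAKPOINT:
        # Smaller screen: everybody is 12 columns wide, so they stack.
        return {chunk: 12 for chunk in DESKTOP_SPANS}
    return dict(DESKTOP_SPANS)

print(column_spans(1200))  # {'header': 12, 'feature': 6, 'promo_a': 3, 'promo_b': 3}
print(column_spans(375))   # every chunk spans all 12 columns
```

The point of the sketch is the same one made above: the set of rules is exhaustive and lives with the designer; the user only ever sees the articulated instance.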
By moving this high up into the design process, conceptually, we can see how responsive web design acts as a shim between the technical browser requirements and the perceptual requirements of the human user.
That’s the thing that I want to pull out of this technique. That’s the thing that’s interesting to me: that the responsive grid is a model that allows us to write the rules necessary to effectively articulate an instance in the system.
The only time you see the columns and the grid like I show them here is, when someone’s trying to explain the model. The model’s for us, for the designers, not for the users.
What the users see is content that makes sense, that’s been tuned to their perception. The next question is, “How does this concept of models, rules, and articulation work when we’re talking about how people process information?” To dig into that, let’s look more closely at how people make meaning.
In the brilliant and often-quoted “Design Quarterly” essay “Hats,” Richard Saul Wurman writes that “you can only understand something relative to something you already understand.”
Like much of what Wurman says, simple statement, profound consequences.
Philosopher and semiotician Charles Sanders Peirce identifies three ways that people signify, or make meaning from, symbols.
He calls these modes of signification. The first of these, perhaps the most straightforward, is the symbolic. In this case, the signifier does not resemble the signified. It’s arbitrary and conventional.
The classic example here is the thing tree and the word tree. The word tree is our signifier, the thing tree is our signified. We could as easily call this a crocodile, and as long as we all agreed that this was a crocodile, we could use it exactly the same way.
Text, and this kind of arbitrary signification, is still what much of the web is made up of.
HTML is literally a language for marking up a certain kind of text. Text still remains a large part of how we understand the world as well. But two major factors are disrupting this.
One is mobile, which is variable in context. The other is the connected environment, which goes beyond textuality and moves into embodiment.
Both of these are intertwined, and they both use methods of meaning making to go well beyond the text.
While this kind of signification will continue to be important in the way we understand the world and the way we move forward, I argue that we need to look at other ways that we make meaning if we’re going to adequately design for this environment. Let’s look at a couple more.
The indexical mode. The definition here is a case where the signifier is directly connected to the signified. Classic examples: smoke signifies fire. Fever signifies infection. A knock signifies a visitor. Handwriting signifies the writer.
There have recently been some interesting examples that have cropped up in the Internet of things, of the indexical mode.
This is a Good Night Lamp. The Good Night Lamp is an Internet-connected lamp that has a very simple purpose. It allows people separated by distance or circumstance to remain connected.
The way it works is, the big lamp, it’s shaped like a house of course, has a button in the chimney. You turn the button on, the lamp goes on.
The person with whom you’re trying to remain more connected, the separated loved one if you will, has one of these little houses wherever it is that they are. You turn your lamp on, their lamp goes on.
The scenario is you come home, you turn your lamp on, you go about whatever it is that you do when you’re home, and then when you go to bed, you turn the lamp back off.
It’s a simple idea, but there’s something clever going on here. And the something clever works on the indexical mode. It’s that when the lights are on in the house, somebody’s home.
Now, it seems simple, almost stupid, but if you think about people who are away from home or separated from loved ones, then you get all the things that are wrapped up not just in the word “home” but in the concept of home. The lamp uses its shape, its form, and those kinds of connections to signify at a much deeper level, at an emotional level.
We can see why this is the case, when we compare it to other ways you could do this. For instance, you can send a text message “Hey, I’m home,” text again “Hey I’m going to bed.” Nowhere near the same kind of connection.
You could also get closer to something like this, and have a little switch that turned on a blinking LED that blinked for the whole time you were home, and it blinked in your loved one’s house, too, until you went to bed and then it went off. They would probably grow to hate you.
My point here is that this mode of signification moves the connection between the signifier and the signified deeper, into our embodied, emotional experience.
Our last mode goes even deeper still. This is the iconic mode. In this case, the signifier is seen as resembling or imitating the signified. Semiotician Daniel Chandler writes that “iconic signifiers seem to present reality more directly than symbolic signs.”
I’m going to break my pattern of showing a positive example here, and show you a negative example first, because when we think of icons in this practice, we usually think about something like this. This is the icon, of course, for Save.
The notorious icon for Save, because nobody saves on these anymore and hasn’t for years. Many people who would press this icon have never even seen the physical thing.
We can even take something that’s a little less notorious, these icons for instance. There’s still no direct connection to the signified. At one time, with the reel-to-reel tape machine, there might have been something indexical here.
This is still operating on the symbolic mode. There’s an arbitrary connection between the thing, and the thing that it’s standing for.
Iconic signification in the linguistic semiotic sense, operates at a much more fundamental level of meaning-making.
This image, for instance, signifies to us much more clearly and much more directly, than any of the preceding examples. It’s an outlet. It signifies a concept, a face, and an emotion, happiness, to us pretty directly.
It’s because it resonates with the way we perceive the world. It has the salient characteristics of a face.
In contrast, compare this with the kinds of outlets that I have in my garden. We still see a face, but it’s pretty appalled.
Andy: Every time I open the little door now, to go plug something in outside, I’m confronted with this quandary.
Now, some of this is cultural and some of this is learned, but more than anything iconic signifiers work, because they appeal to a much deeper level of meaning-making.
Louise Barrett writes that “this innate bias may not be for faces as such, but for the particular kind of geometric configuration that faces present.”
The face is a pattern that we recognize, because of the way our brains are wired. This is the thing that we understand, in order to understand something else.
Faces are, of course, far from the only pattern that we recognize. Edward Tufte has made a pretty good career of fine-tuning and optimizing the patterns that we use, to make meaning.
Here is Tufte’s before-and-after redesign of a New York train timetable. We can see in the corner up there is the before, and the after is this thing that you see in the foreground.
Tufte removes what he calls the chart junk. The grids, the lines, the repetition. In terms of the responsive web model we were looking at before, this is the model we use to put these together, but it’s not the model that we use to understand it. It’s not what appeals to our perceptions.
What’s left in the lower version are the salient characteristics. In this case, it’s the groupings, the associations and the continuities that make sense to us.
These two concepts, differentiating between the model and what the model does in a system, and pulling out the salient characteristics of signification, are the things that I want to apply in this next section in thinking about IA, and how that leads to a responsive IA.
At last year’s IA Summit, Dan Klyn and the very smart folks over at The Understanding Group put together a poster called “The Nature of Information Architecture.”
In the TUG model, IA has three main parts. It has ontology, particular meaning, what Dan Klyn has in other cases referred to as “what we mean when we say what we say.”
I want to add one more layer onto this and say that it’s also the argument. It’s how we encourage users to think about the content or functionality, we’re offering.
The second piece of the TUG model is the arrangement of the parts, or the arrangement of meaning in and across contexts. In terms of the argument, this is how the pieces of the argument fit together. It’s a method of arrangement conceived to create a particular kind of understanding.
Now, the last piece of the TUG model is choreography, which are rules for interaction among the parts, the appropriate unfolding, as Dan has put it.
In my argument about arguments, I’ll argue that choreography must respond to context in order to be effective, and we’ll look at why that’s the case.
To do that, I’m going to turn this model not quite on its head, but sideways, and add some stuff to it. The first of these additions is content: all that it is possible to say about a thing. This is the stock from which we distill our particular meaning and derive our ontology.
From there we arrange the parts in taxonomy, and then we of course create the appropriate unfolding in choreography. Then of course we all know that the end goal of IA is to make this intelligible, to our users.
Intelligible content, however, is not our only goal. The reason that most of us have jobs is because it matters how that understanding happens.
This is what we often refer to as the user experience of a particular design or implementation, but we can be more precise and call it qualia. Now qualia is an idea that’s been around for a while, but it helps hone in on something very particular that I want to look at in this model.
Qualia is a description of these subjective or qualitative properties, of an experience.
Qualia can be anxiety, frustration, anger, or fear, or it can be satisfaction, accomplishment, delight, or euphoria. It won’t be the same for every site or every experience. Skinnyties.com, for instance, is whimsical, spontaneous, self-assured.
The qualia for CancerCarolinas.org is going to be quite different. It has to be to meet its goals.
Let’s look at an example of when qualia’s effective. This is a site for Warby Parker. Warby Parker is a designer and manufacturer of custom eyeglass frames and monocles.
They have an online sales portal with virtual try-ons. In addition to good-looking frames, they also have a philanthropic mission. For every pair of frames they sell, they donate a pair to someone in need.
The website tells a story in a compelling and effective way. As the site unfolds, we get the narrative around how Warby Parker is building a company to do good. The qualia we get here is trust, admiration, and generosity.
Now, the generosity here isn’t my sense that they are generous, but it’s me being generous to them. The qualia is something that I feel. I want them to succeed. They’ve got me on board.
Now, the reason why Warby Parker is interesting to me is that I saw the site, liked what they were doing, ordered a pair of glasses to try on, and could not find anything that worked.
Their product failed me, but I was still a fan. I told my friends about it. “Oh, you want glasses? You should try Warby Parker.” They didn’t fit me, but I wanted them to fit someone, because they had gotten me on board.
Andy: I became an advocate even though I wasn’t a customer. If you’re doing work for e-commerce sites or anybody that has anything to sell, that’s huge to make your non-customers advocate for you.
Now, what I want to argue is that this introduces an additional element into our extended model of IA—the medium. In this case, a device.
Here I’ve represented the device as a prism. We can see why the medium is a prism by examining what happens when that medium changes. Let’s take this same site we looked at, Warby Parker, and look at it on my phone.
We start off with a pretty well-choreographed m-dot view that moves over into a fluid view of what’s pretty clearly a desktop page, then finally defaults into a fixed-width desktop page. It doesn’t scale down. You have to pan around to see what’s going on.
I don’t say this to pan Warby Parker. Not at all. I’m still a total fan. They’re doing great things and are better at this than most, but the qualia here is different. The story’s the same.
If I work hard enough I can get the story out, but I’m less receptive to it. We all know how hard users are willing to work to get at your content, which is about zero.
This highlights an important feature of our model. All the work we do in IA will count for little, if not nothing, if it’s distorted beyond recognition by the time it gets to the user.
Where do we intervene? Let’s move back through the model and see what can be done, and where. Some organizations try and intervene here.
Unfortunately, what usually happens is that you simply ensure that the experience will be bad. This is the mobile website for GreenChameleonDesign.com.
If you can’t read that, it says, “Choosing your creative agency isn’t like buying a pair of shoes, so why don’t you make yourself comfy and visit our site on a real screen?”
Andy: Brad Frost who tweeted this had this to say about that experience.
Andy: I can’t be sure, but I’m guessing this isn’t the qualia Green Chameleon was going after. I wouldn’t say that this drives me to acts of physical violence, but it is the mobile-era equivalent of this.
Andy: Most of us have learned that this is a shaky tactic at best, so let’s move back down our model. We’ve already seen some cases where choreography can work, in responsive design.
There’s also the case in native design, where you will articulate even more differently, depending on the particular device prism in question.
We could look at choreography as something that can be split. If you’re designing for Android or iOS, you can have a different choreography depending on the needs of the device.
What I want to get at is that if we stop here at the choreography level, we’re shortchanging ourselves.
Taxonomy can also act as a point of articulation, a place where we can express ideas fluently and coherently. That’s the example that I’d like to look at.
Again, to reiterate, the responsive IA is an information design strategy that facilitates the expression of specific meaning, across multiple and independent contexts.
For this example, I’m going to apply some of the concepts we’ve discussed around the responsive grid and around modes of signification to create something that I’m tentatively calling a taxonomic grid.
There are four basic steps here—determining our narrative, auditing content in an existing organization, identifying and building single-dimension structures, and then articulating compound taxonomies to meet goals, accommodate constraints, and leverage opportunities.
For those of you who build the big IA taxonomies that help determine how a site is structured and how it’s navigated, most of this will look pretty similar to what you’re doing, but there are a couple of key differences, tied to how we understand information and how we articulate it, that I want to call out here.
I’m going to do this by applying it to another organization, for which I have a lot of admiration—826 Seattle. 826 Seattle is a non-profit writing center in my neighborhood in Seattle.
Their goal is to help kids in the six-to-18 age range improve their writing skills. They do this with a healthy dose of whimsy, exuberance, and creativity. 826 is donor-funded and largely volunteer-led.
Now, 826 did a site redesign in 2011, and the site you see here is that redesign. It’s effective. It’s attractive. It meets the goals the center has for it. It’s not so prepared for the mobile context, though. Here’s their site on a mobile phone. This is about as good as you can get.
The goal for this exercise will be to think through what needs to happen at the taxonomic level to effectively articulate 826’s specific meaning, their ontology, to mobile, and in the process to think through how work at the taxonomic level is necessary to articulating a site’s particular meaning effectively.
We’ll start with our first step, which is determining the narrative. In general this involves the standard things that we do for discovery, so stakeholder interviews, traffic analysis, user research.
In the case of 826, I was fortunate enough to be able to talk to the associate director, and look at some of the recent traffic stats.
This is a project that I did for this presentation. The other ones I wasn’t able to share here.
What I found in this conversation was that of their four main audiences—students, teachers, donors, and volunteers—the audience that was most important for them for their website was the volunteers, because the website is how they reach and recruit their volunteers more than any other pathway.
We can see how they do this on a desktop site, by looking at how their structure’s laid out. They have a secondary nav up here on the top that has “volunteers” in that left-hand first spot.
Front and center is a “Get Involved” link that goes right to the volunteer section. Then, under “Our Programs,” there’s lots of information for how volunteers can get involved.
All of this gave me, in thinking through this exercise, a basic idea of what they were going after, what they were working to achieve. From there I audited their content, and I looked at their existing organization, so the standard things we would do.
From here, from this content audit I was able to reverse-engineer the taxonomy, as it was presented on their site.
It was fairly broad. It’s fairly shallow, and what I found was a multiple-pathway structure. They had several ways for people to get to base-level content, about volunteering and about being involved in the center, but pathways that didn’t obscure the other paths for the different roles.
Up to this point, it’s been fairly standard, if not lightweight.
What I found in working in mobile and working with clients is this is usually, where they come to me and say, “Great. You’re done. Get it on a device.” If we stop here, we might end up with something that’s pretty close to this.
This would be a standard way to translate this site onto a mobile device, where we’ll have our nav. We can hide the global nav behind the little hamburger icon, and then we get our graphic header, features, programs.
We should ask, “What’s the argument we’re making here? What’s the qualia that the user experiences about this argument? Are we meeting the same goals?”
My answer is no, we’re not, particularly when we consider the way the desktop was able to reach potential volunteers. The desktop did that by using space, which we don’t have here, but that doesn’t necessarily mean that we have to choose between one, or the other.
Either we meet the goals of the desktop, or we meet some other goal on mobile. For that matter it’s often not even the same goal. I’ll argue that we can make this more effective by reexamining our taxonomy, so let’s go back to where we left off.
If we want to build this taxonomy out to be a responsive kind of taxonomy, our first step is to fill in the missing pieces of the conceptual grid. Flesh out the way that users are making sense of the underlying organization, and to do this, go beyond how users are making sense in a strictly symbolic way.
Here I broke the structure down into as many relationships and levels of granularity, as I could think of based on the user’s field of view. The actual scope is a bit larger than what we initially saw. What I’ve color-coded are the implied categorizations and the relationship types, of the existing taxonomy.
In this case we had relationships based on activity, audience, category, hierarchy, location, and time, and here they’re all intermixed. We see this on just about any site.
Nothing’s one-dimensional. From here, in order to think about how this translates, how this articulates, into multiple contexts, the next step is to identify and build single-dimension structures.
In this case, when I’m talking about a dimension, I’m referring to a single, simple organizing principle. The goal of the single dimension is to tease out the range of possibility for the whole, but to tease it out one piece at a time.
Deploy the single idea, so that the pattern as a whole can perform more effectively.
In this case, the individual dimensions align for the most part with the original relationship types. We have location and time: a simple way to map is, is it during school? Is it after school? Is it on the weekends? Type of activity: is it a school visit? Is it a field trip? Then people and roles: students, parents, teachers, volunteers, and staff.
I’ve skipped over those quickly. This last one I’ll go into in a little more detail, and look at the nuts and bolts. The interesting thing here is, although it doesn’t come out in this particular example, when I’ve done this on projects we end up with a lot more dimensions than were initially present on the site.
They become sites of opportunity and places where we can say, “Hey, there’s a whole new way that you can approach this that you’ve never thought of, because it wouldn’t have worked on the desktop, or it wouldn’t have worked on wherever else you had implemented this website.”
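As a thinking exercise, the single-dimension idea can be sketched as a taxonomic grid in which every content item has a coordinate along each dimension. The items, dimension names, and values below are hypothetical, loosely drawn from the 826 example rather than taken from their actual site.

```python
# Each content item gets a coordinate in every single-dimension structure,
# like a cell in the conceptual grid behind a responsive layout.
# All items and dimension values here are hypothetical examples.

content = [
    {"title": "In-school tutoring",       "time": "during school",
     "activity": "tutoring",   "role": "volunteers"},
    {"title": "Weekend writing workshop", "time": "weekends",
     "activity": "workshops",  "role": "students"},
    {"title": "Field trip program",       "time": "during school",
     "activity": "field trips", "role": "teachers"},
]

def slice_along(dimension, value, items):
    """Slice the grid along a single, simple organizing principle."""
    return [item["title"] for item in items if item[dimension] == value]

print(slice_along("time", "during school", content))
# ['In-school tutoring', 'Field trip program']
```

Each dimension deploys one organizing principle at a time, which is what lets the pattern as a whole stay consistent and predictable.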
This is also the place where people tend to get tripped up, because immediately they say, “Oh, this looks like facets,” but there is a difference between this and facets in terms of purpose and approach.
Facets are often a supplement to goal-directed global navs, and have fairly low predictability. This kind of grid is based on semantic organization. It needs to carry an information scent based on particular meaning.
The way we do that is by paying close attention to consistency and predictability, in the elements of the grid. In this case, it’s parts of speech, register, tone, granularity, mode.
These are the salient characteristics of our grid. These are the patterns that speakers recognize, even though they can’t identify them.
To come back to the idea of the grid, this is the equivalent of our columns, rows, groupings, and associations. Like Tufte, we’re not going to present those in the final design, but if we pay attention to them here, we don’t need to. The grid itself isn’t the salient characteristic; the relationships are.
This is the chart that we started off with for 826. This is the role dimension. You can say that 826 is comprised of members of the community, or of the 826 organizations (that’s national), who consume services as students, parents, or educators. There is a consistency that operates all the way through this grid. There’s a level of granularity that’s consistent as well.
This is important, because in the next step, we’re going to use these dimensions to express a contextually-articulated instance of this grid. This is the culminating moment in this taxonomy exercise.
It’s deceptively simple. The idea is this: based on what you know about the site’s particular meaning and goals, and on what you know about the constraints and opportunities of the prism in question, figure out a way to best combine the elements you’ve identified.
In this case, I’ve articulated a mobile taxonomy that is relatively narrow and deep. I’ve subdivided concepts by activity at the first level, and then more or less by category at the second level. Then, intermixed at the third level, are the role-specific and location-specific relationships.
This doesn’t replace the desktop taxonomy. You might have a separate one for the tablet. That’s where the difference is here: in each case, we’re articulating something that’s specific to the context.
What that means is that users are going to look at it in one place, then look at it in another place, and be presented with the information in a different order. But it’s an order that is consistent and regular, because we built it that way, and it is optimized for the device that they’re on.
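To make the mechanics concrete, here is a minimal sketch in Python. The dimension names and terms are illustrative assumptions only, not 826’s actual taxonomy; the point is just that single-dimension structures are kept separate, then combined in a different nesting order for each context.

```python
# Illustrative sketch: single-dimension structures (one simple
# organizing principle each), kept separate until articulation time.
# All names here are hypothetical, not 826's real content.
DIMENSIONS = {
    "time": ["during school", "after school", "weekends"],
    "activity": ["workshops", "field trips", "tutoring"],
    "role": ["students", "parents", "educators", "volunteers"],
}

def articulate(order):
    """Nest the chosen dimensions in the given order, producing a
    context-specific taxonomy (e.g., narrow and deep for mobile)."""
    def build(levels):
        if not levels:
            return None  # leaf: no further subdivision
        head, *rest = levels
        return {term: build(rest) for term in DIMENSIONS[head]}
    return build(list(order))

# Mobile articulation: activity first, then role (narrow and deep).
mobile = articulate(["activity", "role"])
# A desktop articulation might lead with role instead:
# same dimensions, same content, different order.
desktop = articulate(["role", "activity"])
```

The design point is that neither ordering is the “real” taxonomy; the dimensions are the durable asset, and each context gets its own articulation of them.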
In the case of 826, their site on mobile might look something like this, where we have our nav, and then, instead of the features at the top, we have calls to action: learn, volunteer, and then about 826. Or it might look something like this, where we have headers, then activities, then get involved.
In both cases, the taxonomy grid gives us a guideline to follow based on our site’s content and based on how users perceive patterns. This is where I’m going to end with this example. There’s no big reveal of how I fixed 826’s mobile taxonomy.
Their eventual approach, like all the approaches that we take, would need follow-up review and iteration, stakeholder input, and testing.
This specific example isn’t the solution for the challenges of mobile, but it is intended to be a practical tool for effective experiences that are flexible, adaptive, and open to change.
It’s this idea of being flexible and open to change that brings me to my last point, about responsive IA. Luca Rosati recently wrote that “embracing ambiguity, embracing the possibility of not understanding exactly how the pieces fit together, means designing systems that surpass our expectations of them.”
Our systems are only going to get more complex. Though I’ve only looked at the web here, what I hope you’ll take away from this talk is insight into how we can apply the principles of modeling based on human perception to allow for the expression of specific meaning across multiple contexts.
In closing, and to sum up: a responsive IA is one that is built on a rich understanding, content-driven guidelines, and articulated information structures, and that embraces ambiguity as a strategy for negotiating the connected environment.
The goal of this approach, I think it’s a worthy one, is to hopefully create a future where the connected environment is a bit less like this, and a little more like that.
Andy: Questions? Yes.
Audience Member: Can you explain more about what you mean by dimensions, and the difference between that and an ontology?
Andy: Oh, sure. A dimension, and I’m not trying to be obscure in using “dimension,” is similar to the idea of a facet. It’s one way to look at a certain piece of data. The reason I pulled it out as a separate word is that I wanted to separate this idea from the idea of facets.
Facets we build in order to help people get places. With a single dimension, the purpose is to organize concepts, not necessarily to help people get places.
What I’ve done with this, in the few times that I’ve used it, and what the goal is, is to be able to pull out individual pieces of each of those dimensions and then assemble them in a way that’s appropriate for the particular context that you’re working in.
You could look at the collection of all the different dimensions as an expression of an ontology. The ontology, again, is the [inaudible 36:41] to “what we mean when we say what we say.” It’s your particular meaning. It’s the argument that you have about your content. Did I answer the question?
Audience Member: Yes.
Audience Member: You kind of mentioned this very quickly, but I wanted to know more about how you deal with the content being restructured on a different platform, when a user is familiar with the structure on the desktop.
He’s looking for something very quickly, and he thinks that he would know where to find it. All of a sudden, he’s presented with a different structure.
Andy: That kind of question is what prompted me to think about how we can change this process, or what the underlying process needs to be, in order to structure things differently.
I work in a space where my projects are almost always responsive and articulated across different platforms. The request that we normally get is: make it the same. Put things in the same place.
What we see…the reason I used 826 as an example is that, as soon as you put all of the structural elements of the 826 desktop site onto a mobile phone, you’ve created a different story: because of space constraints, you’re forced to reorder the elements.
You’re forced to put them in a different juxtaposition. The reason that I went through the different modes of meaning-making is to try and paint the idea that words and their specific meanings are only one way that we make meaning in the world, and on the web.
The juxtaposition of those ideas, the way they fall into a grid, not only physically but conceptually, and the associations we make there, is another way that we make meaning.
The first answer to that question is that even if someone says, “No, I want it to be the same, because users are looking for the same thing,” often they won’t be able to find it anyway, because the meaning has been shaken up by responding to the device.
Unless you do something like shrink the whole screen down, which is terrible, because then there are all sorts of usability problems.
The second piece is that if we look at native applications, we can get some cues into how people operate with this. That was another thing that tipped me off, because I’ve done a fair amount of native work as well.
I’m one of these nerds that usually has both an iOS and an Android device on him. I found that moving between them, if you use Evernote on the two devices for instance, at a certain point they were pretty wildly different. They were respecting the navigation paradigms of each of the devices.
Entering each one of those, though, the user brings the context of their familiarity with the device, and with how it operates.
For native applications, when you have something that is built first on iOS and then mapped over onto Android, it’s usually pretty messy and pretty broken.
It also leaves at the door all the information that the user brings. The web is a little bit different, because across devices, websites tend to look more or less the same. When we’re comparing the web on this machine to the web on a mobile device, we have a lot more options.
The corollary to that is if we take a mobile-first approach and say, “We’re going to start there, and then we’ll scale up to desktop.” What we see is desktop sites where you now have the hamburger icon.
You guys know what I mean? It’s the three horizontal lines, the hamburger icon, and that reveals your navigation.
You’ve got this desktop site that’s got all this white space, these huge empty spaces in it. Sometimes it’s an aesthetic choice, a design choice, and it’s great.
Sometimes, it’s someone taking mobile and mapping the limitations and constraints of mobile onto desktop, missing opportunities there in the same way we do when we translate the other way. Does that answer your question? Did I talk a lot?
Audience Member: Excuse me. Is site ontology [inaudible 40:38] ?
Andy: Yeah. One of the reasons that I started with the idea of the responsive grid was to give people fuel to take back and say, “Hey, this thing that we understand as the responsive grid, it’s not a monkey with a crayon drawing lines. There’s another process behind it.”
Audience Member: It’s hard not to think more about the practical example of 826 Seattle and about how the grid could make the design better. What your screen examples seem to be suggesting is that you could have multiple layouts, depending on role.
Audience Member: I was wondering if that takes a special kind of technology in order to be able to do that. I was wondering if that’s a viable option for them?
Andy: Thank you. I was not suggesting that there would be multiple layouts depending on role. I was suggesting that once we have a multi-dimensional taxonomy, which is not the final product, we’re able to rearticulate it into something contextually appropriate.
There are several different options available. The purpose of this, and I don’t know if that came out quite as well, is the idea that this is one way we can create a system out of a taxonomy: to have that large pool of organization we can draw from, and find the organization that works best.
The reason I showed two screens, was to show that there are multiple options available. This wasn’t a project that I finished with 826.
It didn’t have the input necessary, because I can’t do all of this on my own, to decide which one of those is best. It wouldn’t be a case where I would suggest that they do a multiple-roles entry point.
That’s a different kind of story than they’re telling on desktop. That’s a different kind of scope of project. Thanks for that.
We’re out of time. Thank you.