Dorian Taylor I've spent a good chunk of my career doing these somewhat arcane, one-off internal infrastructure projects. I would say the key difference between product and infrastructure is that infrastructure is what the organization uses for itself, for its own purposes, and so tends to be a lot more nebulous as to where its boundaries are. What I mean is, whereas a product can be clearly delineated as, say, an app or a website, infrastructure is kind of like connective tissue that goes in between the different parts of an organization, and it has less of an easily recognizable shape. Nevertheless, software is software, supposedly, and there are known ways to build it, as they say.

Dorian Taylor Generally, it involves isolating identifiable components and creating them in isolation, then assembling them upward. None of this should be controversial. However, there exists the problem of resource control. Management always wants to know how long this process is going to take and how much it's going to cost. But in order to answer these questions, which in our line of work is usually just the same question asked from two different angles, we need to have a lot of information about what we're trying to do. By this I mean, if you're rolling out vanilla Drupal website number 472, there is a lot less uncertainty in that process than with some brand-new product, service, or infrastructure. So this story starts with me ultimately trying to deal with the mundane problem of deadlines, while being too naive and inexperienced to understand that the accuracy of your estimates is a function of the amount of information you have about the problem you're trying to solve. And nothing else. Certainly not your prowess as a developer.
If you're going to make your deadlines you need accurate estimates, which means you need adequate sample data, which means you need to map out the concrete process of implementing your work product, which means you have to have a very clear idea of what the heck you're even trying to do. And determining what the heck you're even trying to do is itself a fraught process, with a number of technical and, at least as importantly, political interactions, not to mention ethical and other concerns. Organizing all of these goals and principles in a way that there is consensus among stakeholders is essential to achieving the conceptual integrity that affords not only a coherent and functioning outcome, but also more pedestrian concerns like getting the job done on time. So what occurs to me, and what has occurred to me for some time, is that in our line of work, and potentially others, we need a way to record, reconcile, and communicate a gradient of concerns, and whose concerns they are, in an unbroken line from what is desired in interaction with the outside world, to what is necessary internally to function. In 2007, I was working in a lab at an antivirus company. And what we were trying to do was create an infrastructure that would scan the entire internet, looking for machines that were sending email that shouldn't be. This sounds like it would be sophisticated, but the programming was actually quite straightforward. In fact, I would call it a design-heavy project, even though there was almost no user interface. There were a zillion ways we could have organized this thing, and we had to come up with exactly one. Indeed, the whole user story framing fell apart, because most of the project had no user to speak of besides us. In the midst of all of this, I developed a technique I called behavior sheets.
Suppose you're going to write some program or module or other. You take an ordinary outliner, and you just start bashing out bullet points, each time asking the question: what must the program do, what must it not do, and under what conditions? Then, once you've exhaustively mapped out your desiderata, you eyeball hour estimates for slices of it, and then you add those up. And that is your estimate. The estimates with behavior sheets were fantastically accurate, but the technique had limitations. First, it was intended to be the penultimate step before coding. The idea was you would capture all the fine-grained behavior you needed to express in the code.

Dorian Taylor This made it only amenable to describing small, discrete modules or programs, the kind of thing that would take a developer a week to do, tops. It was also self-defeating as a planning instrument, because it would take as long to do the behavior sheet as it would to write the code afterward. So while this exercise absolutely helps to organize one's thoughts, and it does produce accurate time estimates, you can't use it for forecasting. Because in order to justify the cost of doing a behavior sheet, I would need a behavior sheet for the behavior sheet, and so on: a Zeno's paradox of behavior sheets. But this might not actually be a bad thing. Developing behavior sheets is valid work and deserves a budget. We could imagine them as a rung in a ladder of increasing specificity. Our Zeno's paradox can actually manifest as the following gradient of relations between categories of concerns, disciplines, and techniques: business goals to user goals, that is, stakeholder and user research, personas, etc.; user goals to user tasks, that would be scenarios mapped to those users; user tasks to system tasks, that would be system architecture in the large; system tasks to system behaviors, which is where behavior sheets themselves would fit in; system behaviors to source code, that is, the actual act of programming; and source code to running code.
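The arithmetic of a behavior sheet is simple enough to sketch in a few lines. Everything below — the behaviors and the hour figures — is invented for illustration; the point is just that the estimate falls out of summing many small, eyeballed slices:

```python
# Hypothetical behavior sheet for a small scanning module:
# fine-grained "must"/"must not" bullet points, each with an
# eyeballed hour figure. All behaviors and numbers are invented.
sheet = [
    ("must accept a hostname or address range on stdin",     0.5),
    ("must skip addresses in the private ranges",            1.0),
    ("must not retry an unresponsive host more than twice",  0.5),
    ("must emit one record per responding host",             2.0),
    ("must not block the scan loop on slow DNS lookups",     3.0),
]

estimate = sum(hours for _, hours in sheet)
print(f"{len(sheet)} behaviors, ~{estimate} hours")  # 5 behaviors, ~7.0 hours
```

The accuracy comes from the granularity: each line is small enough to eyeball reliably, which is also why producing the sheet costs about as much as writing the code.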
That last relation is dealt with by the computer, through compilation and execution. We can imagine this laid out as a hypermedia artifact built up over time as information accumulates, starting from the coarse-grained, outward-facing concerns and forging its way into the technical minutiae. This idea wouldn't come to me, however, for another several years, because I was still fixated on the relatively parochial cost estimation problem. It was around this time, in about 2008, that I finally took a look at the architect Christopher Alexander. If you've heard of him at all, it's probably because of his epic, thick tome called A Pattern Language. But the first book of his I read is actually his second most well known, which was his PhD dissertation, called Notes on the Synthesis of Form. Alexander's literal, actual thesis is that the way you solve complex design problems is by recursively breaking the problem down into smaller problems until you have a set of problems that you can solve, and then you put them back together. The dissertation describes the mathematics of precisely how you do that.

Dorian Taylor The model represents fairly fine-grained design concerns, or what Alexander calls fitness variables, as nodes in a graph, with their interactions represented by links between them. Each node can be in a state of being satisfied or unsatisfied, and the links mean that changing the state of one node means changing all the other ones that are connected to it, and all the nodes that are connected to them, and so on. Alexander imagines an infuriating contraption: a huge bank of lightbulbs with their respective switches, all in a random state of on or off. Hidden wiring behind the rig makes it so that toggling any switch affects a number of other bulbs besides the one it is associated with. The only way to get the bulbs into the state you want is to work on them all at the same time.
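Alexander's contraption is easy to simulate. A minimal sketch, with invented wiring; the point is that toggling one switch disturbs bulbs you thought were already settled:

```python
# Sketch of Alexander's light-bulb contraption. Each switch is secretly
# wired to several bulbs, so toggling it flips every bulb on its circuit.
# The switch names and wiring here are invented for illustration.
WIRING = {
    "s1": ["b1", "b2"],
    "s2": ["b2", "b3"],
    "s3": ["b1", "b3", "b4"],
}

def toggle(bulbs, switch):
    """Flip every bulb wired to `switch`, returning the new state."""
    new = dict(bulbs)
    for b in WIRING[switch]:
        new[b] = not new[b]
    return new

bulbs = {b: False for b in ["b1", "b2", "b3", "b4"]}
bulbs = toggle(bulbs, "s1")  # lighting b1 also lights b2
bulbs = toggle(bulbs, "s2")  # "fixing" b2 disturbs b3
# b1 and b3 are now on, b2 is off again: the frustration in miniature
```

Chasing the bulbs one at a time never converges, which is exactly why Alexander concludes the configuration has to be solved as a whole.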
The book then goes on to describe a way to disarticulate this hairball by finding partitions that cut the fewest number of connecting lines. And you do this, of course, with a computer, which was unheard of at the time: deriving a project plan from the topological structure of the requirements. Even to this day, this still only kind of happens. To wit, one of Alexander's most important insights, in my opinion, concerns the tendency to group project requirements into arbitrary categories. He was like: your labeling system might be perfectly sensible, but under the hood that categorization is acting like a partition that slices through way more lines than the optimal partition would. This yields a suboptimal project structure, which yields a suboptimal product, to which I add: one likely to be more costly and more onerous to carry out. Alexander's reasoning is quite astute. A conventional category maps to a concept, which necessarily has to be articulable as a word. Well, to be sure, there are lots of words, and even more compound terms, but there aren't nearly as many as the possible partitions of even a modest design problem. And while there's an enormous number of possible partitions, the number of ones that are actually good is vanishingly low. So the odds that a tidy-sounding category like "acoustics" for an architecture project is going to isolate the correct subset of the process to carry it out are actually pretty slim.

Dorian Taylor Now, I thought this was great. Minimum-cut graph partitioning, which is what this is called in computer science, is a well-researched problem that has come a long way since the mid-1960s, when the book was written. So all I needed, I thought, were some fitness variables and their interrelationships, and I could apply an off-the-shelf algorithm, and bada bing bada boom, I have a topologically correct project plan. What I determined was something I expect Alexander must have concluded as well: this arrangement is totally unstable.
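At toy scale you can even brute-force the partitioning Alexander describes. A sketch with invented fitness variables and interactions (real problems need heuristics like Kernighan-Lin; exhaustive search only works for a handful of nodes):

```python
# Brute-force search for the split of a small design problem that cuts
# the fewest links. The "fitness variables" and interactions below are
# invented for illustration; real problems need heuristic partitioners.
from itertools import combinations

NODES = ["acoustics", "light", "access", "heat", "noise", "privacy"]
LINKS = {("acoustics", "noise"), ("noise", "privacy"),
         ("light", "heat"), ("access", "privacy"),
         ("acoustics", "privacy"), ("light", "access")}

def cut_size(group):
    """Number of links crossing between `group` and the rest."""
    group = set(group)
    return sum((a in group) != (b in group) for a, b in LINKS)

# try every subset up to half the nodes, keep the cheapest cut
best = min((g for k in range(1, len(NODES) // 2 + 1)
            for g in combinations(NODES, k)), key=cut_size)
print(set(best), cut_size(best))
```

On this invented graph the optimal split cuts a single link, while a plausible-sounding category like {light, heat, noise} cuts three — which is Alexander's point about tidy labels masquerading as good partitions.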
When you introduce a new fitness variable and hook it up, it's bound to completely rearrange the project plan. This would have been fine in 1964, when computer time was expensive and you'd only run the program once. But we can imagine these fitness variables being pulled together in real time. Alexander was clearly aware of this, because he abandoned this technique in favor of the patterns most people are familiar with. And he eventually abandoned those too, because they're only really fragmentary perceptions of this deeper structure that I'm describing, and they are ultimately unsatisfactory, I think, not capable of delivering the goods.

Dorian Taylor So about a year later, in 2009, I was watching this salon at Google that featured Doug Engelbart. At one point, he made this throwaway remark about structured argumentation in the context of his own work: that there has been a whole movement, over the last 20 years or so, called structured argumentation, an extremely significant kind of thing, where this phrase and this thing together make an allegation about this, and this is countered by that, and this is untrue. His own work, of course, was about no less than augmenting the human intellect, dating back to the 1962 paper of the same name. Engelbart's design philosophy has always been about externalizing and reifying networks of conceptual entities, and specifically harnessing the unique properties of computers to rapidly manipulate and rearrange them to generate new insights. To him, structured argumentation was just another property of his NLS project, which he so famously demonstrated in 1968: the research program that I'm going to describe to you is quickly characterized by saying, if in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day and was instantly responsive to every action you had, how much value could you derive from that?
Dorian Taylor Structured argumentation can perhaps be best described by contrasting it with unstructured argumentation: the ordinary discussion that of course happens in meatspace, but also on web forums, on Twitter, and in the issue reports, and more importantly the comment sections, of bug trackers like GitHub and JIRA. All of these are examples of people getting together to interact and share information. Some subset of this activity leads to decisions and subsequent action. The digital examples here are particularly interesting because a lot of the time they create a distinct record. Every tweet has its own URL. Every issue and subsequent comment on GitHub, Bugzilla, or JIRA is absolutely represented on the back end as a record in a database. In other words, these are identifiable entities, but the opportunity for computation, I would argue, stops there. For instance, while an issue in one of these systems has special status, the comments do not. I can respond to an issue in a bug tracker with a comment that just says "I like nachos" or whatever, and contribute nothing to the actual resolution of the issue. Structured argumentation augments this already quite sophisticated hypermedia environment by ascribing types to the entities, and constraining the relationships between the entities by a predetermined set of semantic relations. This creates a ground state of expectations for the content, and thus imposes a set of procedural constraints on what is ultimately a sociopolitical process. In other words, structured argumentation gamifies rhetoric by dictating which moves are legal, just like pieces on a chessboard. Perhaps the most cohesive form of structured argumentation is IBIS, or issue-based information system, developed around the same time as Engelbart's work by the design theorist Horst Rittel, up the road at Berkeley. It's worth noting that Rittel was at Berkeley around the same time as Alexander. Rittel was interested in what he called wicked problems, which he defined as such.
A wicked problem is not easily defined; there is little consensus among stakeholders; it requires complex judgments even to define; it has no clear definition of "solved"; it does not have a right solution, just varying degrees of better or worse solutions; it has no objective measure of success; every attempt at a solution changes the problem, and thus it requires iteration; and it often has a strong ethical or political cost of failure.

Dorian Taylor Wickedness is a property of problems that transcends merely being hard or important. For example, climate change is a wicked problem. Global poverty is a wicked problem. And I would argue the short-term response to the current coronavirus pandemic is not a wicked problem, although mopping up civilization afterward absolutely is. And the reason why the pandemic, despite being a very large and important problem, is not a wicked problem, is because it has an obvious, if costly, solution in the short term. So to tackle wicked problems, Rittel and his collaborators devised a game, which they called IBIS. This game was played in groups and carried out on index cards. Participants in the game are allowed to generate only three kinds of element: an issue, which is exactly what it sounds like, some state of affairs in the world that needs something done about it; a position, which is a candidate for what, if anything, to do about a particular issue; and finally, an argument, which is why a particular position should or should not be adopted. These elements connect to each other through a small number of predefined semantic relations. For example, a position must respond to an issue. An argument must either support or oppose a position. But anything in the system can suggest, or be questioned by, a new issue. When I looked at this structure, however, I was like: holy shit, issues as defined here are functionally the same thing as Alexander's fitness variables. Remember those?
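The three element types and a few of the legal moves can be sketched as a toy data model. This is my own illustration of the scheme just described, not Rittel's notation or any existing tool's API, and the example texts are invented:

```python
# Toy model of the IBIS game: three element types and some legal moves.
# The relation names follow the scheme described in the talk; the class
# layout and example content are my own invention.
from dataclasses import dataclass, field

@dataclass
class Element:
    text: str
    links: list = field(default_factory=list)  # (relation, target) pairs

class Issue(Element): pass
class Position(Element): pass
class Argument(Element): pass

def responds_to(pos, issue):                   # a position answers an issue
    assert isinstance(pos, Position) and isinstance(issue, Issue)
    pos.links.append(("responds-to", issue))

def supports(arg, pos):                        # an argument backs a position
    assert isinstance(arg, Argument) and isinstance(pos, Position)
    arg.links.append(("supports", pos))

def questions(issue, element):                 # anything can be questioned
    assert isinstance(issue, Issue)
    issue.links.append(("questions", element))

i = Issue("Hosts out there are sending mail they shouldn't.")
p = Position("Scan the whole address space for open relays.")
a = Argument("A full scan is cheap relative to the data it yields.")
q = Issue("Will network operators object to being scanned?")
responds_to(p, i)
supports(a, p)
questions(q, p)
```

The `assert` guards are the point: they are the rules of the game, the moves the board permits, with the computer only there to speed up the bookkeeping.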
It should be obvious to anybody watching that carrying out an operation like this with index cards has its limitations, in no small part because the structures created by this process quickly get out of control. There have therefore been numerous attempts over the years to create software applications that encode IBIS and similar structured argumentation techniques. Because if you have the skills and know anything about these techniques, digitizing them is the obvious next step. That said, it took about two decades for the hardware to catch up to the theory. So the first one of these implementations was by Jeff Conklin and his team in 1988. It was intended to be run on a network of Sun workstations. Conklin is still around; he runs an outfit called the Compendium Institute, where, in addition to consulting with his own process, which he calls dialogue mapping, he hosts a software product which is the maturation of his work beginning in 1988. There have been a number of other companies and products in the interim that implement the same basic design. It's worth pausing here for a second to ask why such a software system, which is actually quite simple to implement, and has been around in some form or other for over 50 years, has not been more widely adopted. I suspect two reasons. Fundamentally, IBIS is a game with rules, just like basketball or chess: all of the participants need to know the rules and adhere to them in order for the outcome to be coherent. The computer is only there to speed up the boring parts; it can't do much to enforce the rules in spite of the people using it. So unlike Slack or Twitter or GitHub or JIRA, which, if you squint hard enough, resemble something familiar, IBIS is a completely novel process that does not have roots in Usenet or email or IRC. Number two: when you look at the product offerings, even going back to the original Conklin design, one of the things that stands out is that they're all very platformy.
If people want to participate, they have to get onto that particular platform; there's no way to federate them. Moreover, structured argumentation generally depends on evidence, and you can't just point to the evidence, you often have to attach it. And so then you have to manage files and whatnot. Finally, you can't export the resulting structure in any useful way; while there's some value in the insight you gain, it's a little bit like, okay, now what?

Dorian Taylor I suspect that this confluence of a novel and quite strict process, served only by platform-like solutions, is probably why you haven't heard of it. Or, if you have heard of it, you've probably come to a similar conclusion. What was evident to me was that the data, the argumentation content, needs to be able to move across administrative boundaries. Not only was it important to be able to cross administrative boundaries, it was also important to be able to mix the argumentation content with other data types. An issue would necessarily be about something, and likewise, positions and arguments regularly need to cite their evidence, and that could be anything. Erstwhile digital incarnations of IBIS afforded file attachments, which is a good start, but the obvious thing to do is link to other materials over the web. And you're going to want to display these objects differently depending on what they are, which means you're going to need a way to say what they are. So I submit that the technology best suited for what I just described is the Semantic Web. Under this regime, all entities in the system are addressable by URIs, whether you host the contents or not. Terms for expressing other kinds of data objects, and the semantic relations between them, can likewise be plucked from elsewhere. This means I only have to think about representing and maintaining the core IBIS vocabulary. Those familiar with RDF will know that it supports an inheritance model very similar to object-oriented programming.
If a resource is described as type A, and type A is a subclass of type B, then the resource in question is implied to have all the properties of type B. The really special part about RDF is that the same goes for semantic relations: if a connection X between two resources is a subproperty of connection Y, then the meaning of Y is implied by the assertion of X. This turns out to be incredibly powerful. Because the classes of entities in IBIS are fundamentally conceptual, I decided to make the core classes inherit from the Simple Knowledge Organization System, otherwise known as SKOS. For the unacquainted, SKOS is a vocabulary for expressing concept schemes and taxonomies, and has terms for expressing, for instance, that one concept is broader or narrower than another. The IBIS vocabulary inherits all the types and properties of SKOS, and can therefore be woven directly into a SKOS taxonomy. This is important because there are a great many concepts in the world that are just concepts, with no additional indication that something ought to be done about them. I want to highlight something else before I close out this segment. The specification for this vocabulary is simultaneously machine- and human-readable. While you're presented with what looks like an ordinary web page, all of the information necessary to construct an automated reasoner is embedded in the markup. What's more, and this is important, the terms are only ever declared once, so it's impossible for the human-readable specification to diverge from the machine-readable one. This is a very straightforward technique, and it's essentially unique to Semantic Web technology. It took me over another year to get around to finally writing an app that would make use of this vocabulary. It wasn't really a priority until I needed to test something in a different project I was working on. So over the course of two weeks in mid-2013, I wrote this:

Dorian Taylor I want to caveat a little bit before I go much farther.
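The two inference rules just described, subclasses propagating types and subproperties propagating relations, fit in a few lines. The term names below are shorthand, not real URIs, and the little graph is invented (it borrows the Argument-is-a-subclass-of-Issue and questions-implies-suggested-by relations the talk describes):

```python
# Toy forward-chaining reasoner for the two RDFS-style rules above:
#   (s type C) + (C subClassOf D)    =>  (s type D)
#   (s P o)    + (P subPropertyOf Q) =>  (s Q o)
# Term names are shorthand, not real URIs; the data is invented.
triples = {
    ("Argument", "subClassOf", "Issue"),
    ("Issue", "subClassOf", "skos:Concept"),
    ("questions", "subPropertyOf", "suggested-by"),
    ("arg1", "type", "Argument"),
    ("iss1", "questions", "pos1"),
}

def infer(ts):
    """Apply both rules until a fixed point is reached."""
    ts = set(ts)
    while True:
        new = set()
        for s, p, o in ts:
            for a, rel, b in ts:
                if p == "type" and rel == "subClassOf" and a == o:
                    new.add((s, "type", b))   # type propagation
                if rel == "subPropertyOf" and a == p:
                    new.add((s, b, o))        # relation propagation
        if new <= ts:
            return ts
        ts |= new

closed = infer(triples)
```

After closure, the reasoner knows that `arg1` is an issue, and ultimately a SKOS concept, and that `iss1` is suggested by `pos1`, without anyone having asserted any of that explicitly.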
This thing is kind of a laboratory, with about half a dozen experiments going on concurrently. So this particular implementation is not something I intend to productize, though I do intend to do a full rewrite sometime later on this year. Okay, so perhaps I should explain what you're looking at. Every page represents either an issue, a position, or an argument. This is a different approach from what you normally see in these tools: rather than display a canvas with icons on it, I wanted the subject to be the page I'm looking at. The lozenge on the left represents the subject, and you can see that there are fields enumerating the different immediate neighbors, ordered by the different kinds of relation. As you can see, everything is color-coded: red for issues, green for positions, and blue for arguments. The relations are likewise color-coded. Over on the right is a visualization of the subject's neighborhood, which is highlighted to match. I've been using this tool on and off for planning projects since I made it in 2013. While the original IBIS specification was intended to be collaborative, I've mainly been using it to reason privately over a given problem space, and then I tend to communicate the conclusions through some other medium. I've been doing this mainly because of the specific clients I've been working with. What I've found working with this tool is that the three closest things it resembles are an outliner, a bug tracker, and a concept mapper. So perhaps I'll use those as a jumping-off point. One of the things that has bothered me about outliners is their tendency to be strictly hierarchical: any item in the system can have at most one parent, and I find my use of an outliner limited by this, as my ideas converge as well as diverge. Hierarchical also means you have to start at the top. But we know, for example, from the work of George Lakoff, about basic-level categories, which operate at the genus level.
So one of the things I think is really important in a tool like this is that the space does not impose any kind of orientation on you, and you can start in the middle. Indeed, with this thing, you're always in the middle, no matter where you are.

Dorian Taylor Bug trackers definitely exhibit a collaborative component, and indeed, their primary objects are issues, so in some ways they're even more similar to a tool such as this. However, bug trackers tend to assume you're using them for software development, and so they have a lot of accoutrements to that effect that don't translate over too well to other disciplines. More importantly, though, I want to zoom in on the issues in the bug trackers themselves. I'm inclined to say that an issue in a bug tracker is pretty much the same kind of conceptual entity as an issue in IBIS; indeed, the ability to mark bugs as duplicates or dependencies of other bugs is structurally very similar to what goes on in IBIS. But aside from that, there really isn't that much structure. You can have comments, which can be sources of lively discussion about how to tackle the bug, and why this or that solution is a good idea or a bad one. But it's all freeform text; there's no way to tell what a comment on a bug is without reading it. And as such, you can't organize bug comments much past putting them in chronological order. And of course there's threading, but this is the same problem you see in tools like Slack: threads split off into tree structures, and we don't see very many features in these systems that allow us to tie the ends, or even the middles, of the threads back together. This complex, non-orientable topological space is something that only a computer can handle, and I argue, as I did the last time I was at the IA Summit, that we still aren't taking full advantage of it. As I argued then, strict hierarchy and orientability are constraints of paper.
In fact, I think we can credit Facebook, of all entities, for being the first major company to socialize the use of the word "graph," because that's the kind of structure this is. In my opinion, we should be celebrating this kind of structure, not trying to hide it in a misguided effort to make things quote-unquote simpler. So, concept mappers do tend to be non-hierarchical, but they're just pictures: all the information is embedded in the graphical layout of the elements, and you can't do anything beyond just look at it. I should mention here, as an aside, that there are two nascent tools that are gaining traction, and these would be Notion and Roam. I mention them because they do exhibit this non-hierarchical structure, but they're each doing something which is again a little different, even from each other. The main structural difference of the tool I wrote versus all of these others is that rather than set out to create an app, I set out to create a data exchange vocabulary, on top of which arbitrarily many apps could be constructed. I will level with you: I kind of wrote this tool on a lark. I was actually trying to solve a different problem. Namely, I was designing a protocol to help create quick-and-dirty Semantic Web apps, and I needed a way to test it. I needed a complete data vocabulary to create the app around, and my IBIS vocabulary was just sitting there, ready to go. So in part because of that, and in part because all the other times I've used it involve confidential information, I'm now going to use the tool to tell the story of its own inception. The protocol I designed is about as abstract a thing as you can get. I wanted a way to make quick-and-dirty Semantic Web apps, and my problem was that I had no satisfactory way to get new data into the app. The only way would have involved stitching the data together with some cockamamie JavaScript contraption and sending it over the wire with Ajax, which would have been dirty, but not quick.
Instead, I designed an extremely lightweight protocol that reduces to embedding commands into ordinary HTML forms. So if you want to change the behavior, you just have to change the markup. So I have this abstract thing, and I needed something concrete to test it with. Enter the IBIS vocabulary; bringing it to life as a tool was the entire point, after all. The initial write took about two weeks, mainly because of that graphic on the right, which I'll get to. But after using it for a bit, it became clear that the vocabulary, which I had translated verbatim out of the Conklin paper, needed some tweaking. For one, I was finding arguments to be a special kind of issue, which was unexpected. When you write down an issue, you're saying, roughly, that this thing is out there in the world, and the implication is that something should either be done about it, or it should be steered around. Then you write a position responding to that issue, which has an explicit rhetorical valence to it, namely something like: we should do this particular thing about this particular issue.

Dorian Taylor You then write the argument in response to that position, citing another state of affairs out there in the world, which asserts only through its semantic relation that the position is either a good or a bad one. And this was revelatory, because I could only really get it through using the tool for a while. The text of an issue or an argument should be written as a declarative fact; it's the metadata that tells you how to interpret that fact. It's positions that are written in the imperative. In IBIS you don't explicitly say A should be B; you say A exists, and then, separately, you say turn A into B. And then somebody else might come along and say B has this undesirable characteristic, to which you, or somebody else entirely, may respond: turn A into C. This is tricky, and this is why you need training to use this tool.
You'll need it to practice crafting the language so that it complements the meaning conveyed by the formal system. It also made it clear to me that argument was really a subclass of issue, and so I made that change in the vocabulary and in the tool. There was a similar change for the relation "questions," which I found to behave like a subproperty of "suggested by." The original paper did not express any kind of relation between these two terms; however, when you use the tool, you quickly notice that if an issue questions another element in the system, then that element can implicitly be said to suggest that issue. So, mainly for the purpose of automated reasoning, I added that inheritance relation to the vocabulary.

Dorian Taylor I'm going a little bit out of order here, but I wanted to show you first that one of the immediate consequences of putting this tool to use was to provide feedback on the data vocabulary used to describe its own content. So now let's turn our attention to the user interface. It was really important to me to design this tool such that each entity was directly addressable over the web, so that other information systems could link directly to it. I also wanted to be able to situate the user in a space, so that the main interface is almost like a room, with a list of all of the directions you can go, not unlike an old text adventure game like Zork. In contrast to other structured argumentation tools, any synoptic view, like a map overlay in a video game, is secondary. The next challenge, then, was to graphically differentiate the subjects and their semantic relations. I should note that not only am I teaching myself how an IBIS tool ought to behave, I'm simultaneously teaching myself techniques for designing dyed-in-the-wool Semantic Web applications. Since all the data in this tool is represented in RDF, the actual tool itself is just a thin rendering layer.
Here, I would be remiss if I didn't turn the semantic data out onto the page in a machine-readable way. So I did that using RDFa. This subsequently enabled me to co-opt the embedded semantic metadata for CSS selectors. Here I assigned a provisional palette to each of the classes and properties, and then used Sass to generate the selectors themselves. I then turned my attention to the input, which I should preface by saying I'm not at all satisfied with. I was intent that the interaction for creating these entities should consist of a single complete sentence, and I wanted to be able to batch the sentences out and keep going. I added buttons to disconnect the relations, as well as to change the type of the current subject if it was assigned in error, and to connect existing elements together. And this is technical again: I just created a second form, and I toggle its visibility with jQuery. The input, every input actually, is a plain HTML form, which is interpreted on the server side using the protocol I designed. Now, I suppose the elephant in the room is this monstrosity of a graphic that takes up the right half of the screen. I should note that this is actually my second attempt at resolving this issue, and I can credit one Martin Krzywinski with the original design of this one and the one that preceded it. My challenge was to visualize the neighborhood of the current subject, so an onlooker can get a sense of its relative position and orientation. My first attempt involved an adaptation of Krzywinski's hive plot, which, while it looked really cool, was unsuitable, because changes in the content could drastically change the aspect ratio of the graphic, which meant I couldn't reliably orient it on the screen. So I fell back to his older Circos design, which I opened up on one side in order to fit better with the content. And it is okay, for now. I should add, as a technical point, that you can embed RDFa into SVG as well as HTML.
So I did this here, and just reused the same palette CSS as was applied to the HTML. I want to reiterate that what I've found in this process is that as you work down into the problem, the issues, positions, and arguments get more and more technical, and less interesting to upstream stakeholders. So one can imagine an ideal synoptic view could just bundle up all the elements below a certain threshold and just express their number or something. I'm going to have some time later on this year to spend at least a month or two exploring what this might look like. I want to mention one more thing before I conclude this demonstration, and that is this little secondary graphic over in the corner. This is perhaps the least well-developed aspect of this prototype; I threw it in one afternoon on an impulse, shortly before deciding to abandon any subsequent development on this particular implementation. This is just a vanilla SKOS concept scheme. I was finding through the use of this tool that I wanted to be able to relate IBIS entities to concepts that had an essentially neutral rhetorical valence, primarily for the purpose of organizing them. I similarly added a box for arbitrary external links, although I never got around to creating the additional plumbing to generate previews; that will have to wait for the rewrite.

Dorian Taylor One really trenchant insight from Christopher Alexander in Notes on the Synthesis of Form is that designers will generally agree that a particular design concern is valid, and where they will disagree is the degree to which a valid concern is important. What happens in my experience with conventional hierarchical project management structures is that the person holding the keys will say something like: don't even write down that concern, because it's not a problem.
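The rollup idea mentioned above, bundling everything below a certain threshold into a bare count, might look something like the following sketch. The depth scores and labels are invented for illustration; "depth" here stands in for whatever measure of technicality an upstream stakeholder stops caring at.

```python
def synoptic(elements, max_depth):
    """Show shallow elements; summarize deeper ones as a single count."""
    shown  = [e["label"] for e in elements if e["depth"] <= max_depth]
    hidden = sum(1 for e in elements if e["depth"] > max_depth)
    if hidden:
        shown.append(f"... and {hidden} finer-grained elements")
    return shown

# Invented example content: two high-level elements, two technical ones.
elements = [
    {"label": "Issue: who is the audience?", "depth": 0},
    {"label": "Position: internal tool",     "depth": 1},
    {"label": "Issue: cache invalidation",   "depth": 4},
    {"label": "Issue: regex escaping",       "depth": 5},
]
print(synoptic(elements, max_depth=1))
```

The point of such a view is that the finer-grained concerns remain recorded and countable rather than censored, which matters for the argument that follows.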
Even if the concern doesn't represent a significant cost, it represents a cognitive cost to the person who's trying to plan the expenditure of resources and trying to determine what they can promise upstream. And I think this is something we can't separate: we can't look at the problem of organizing the nuts-and-bolts execution of a project in a vacuum, separate from the organizational politics. The representational artifacts we use to guide our way through projects have to be legible to different people at different resolutions. However, what amounts to censorship at lower resolutions, or higher altitudes if you like, has a masking effect on finer-grained decisions. What do I mean by censorship? I mean it in the sense of the abridgement, the superordination, the simplification required for consumption by executives who have very little time to absorb any nuance, such that the discourse eventually reduces to deliverables and deadlines, which conventional project management tools are only too eager to represent. Project management tools indeed connect tasks to people to schedules, and even compute dependencies; however, they are still fundamentally hierarchical. Furthermore, the basic unit represented is the task, that is to say the process, which routinely gets conflated with the outcome. There's also the rather mendacious fiction that the tasks are uniform and can be counted as a sort of percent-complete, which has all the accuracy of a download progress bar on a hotel Wi-Fi connection. And this is important, because this is what you're communicating to stakeholders that have neither the time nor the expertise to adjudicate on finer-grained details. This is what you're promising them, and if there's any confusion or disagreement, this representation is going to be the reference point for what was promised and when delivery was expected.
It makes sense to me that these tools should reflect the actual structure of the process, which has an irreducible complexity that I suspect couldn't be rendered neatly on a Gantt chart. But I don't think we should be afraid of that. I think we should embrace the complexity, and tell the story to our stakeholders of how we meet it head-on and surmount it, rather than cover it, often to our detriment, with a veneer of legally binding artificial simplicity. It's worth remarking here, as an aside, that Gantt charts were invented for the purpose of managing military and civil engineering projects: bridges, dams, and of course roads; to say nothing of the fact that people have been building these structures for thousands of years and have a pretty good idea of how to organize the work. The critical path of a road-building project is a literal, physical path. By contrast, on digital projects, so much of the job is just figuring out what the job even is. I think we should be honest about that. Conventional project management tools require us to over-specify not only the composition of the project up front, but also its sequence. I actually believe this has an adverse effect on the outcome. Remember this diagram?

Dorian Taylor I argue that conventional project management techniques do this to the structure of the project: they cut arbitrarily across the interrelationships between the requirements (in Alexander's parlance, fitness variables; in Rittel's, issues; or in mine, design concerns), and then they dictate the order that things are supposed to be completed in. The problem with this is that while the obvious dependencies may be accounted for, the less obvious ones are censored, until you encounter them in the middle of the project. Then you're stuck with the ugly dilemma of either ignoring the concern, assuming you even can, or incurring an overrun.
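One way to see the difference between recording dependencies and dictating a sequence: if you capture only the genuine dependencies, any admissible order can be derived from the graph (a topological sort), whereas an upfront schedule smuggles in ordering constraints that were never real requirements. A sketch using Python's standard-library topological sorter, with an invented dependency graph:

```python
from graphlib import TopologicalSorter

# Illustrative graph: each item maps to the set of things it actually
# depends on; nothing else about the order is asserted.
deps = {
    "deploy":    {"build", "provision"},
    "build":     {"design"},
    "provision": {"design"},
    "design":    set(),
}

order = list(TopologicalSorter(deps).static_order())
# "design" must come first and "deploy" last, but build and provision may
# legitimately go in either order; the sequence is derived, not dictated.
print(order)
```

A new concern discovered mid-project is just another node and a few edges in such a graph, rather than a schedule-breaking exception.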
More modern project management techniques model a project as an unordered pool of tasks, and simply proffer something like a burndown chart to tell the story of progress to onlookers. I think this is better, but it still assumes a fixed number of tasks at the outset; it doesn't afford the injection of new tasks. And yes, the conceptual model is still focused on tasks. Only practitioners who are directly involved care about tasks; everybody else cares about goals. This reminds me of my favorite quote, from Herbert Simon's 1969 book The Sciences of the Artificial: "a paradoxical but perhaps realistic view of design goals is that their function is to motivate activity, which in turn will generate new goals." In other words, we need project management tools that can accommodate the ongoing injection of new information, because that is what we are doing all day long. We need a way of telling this story, and getting the money people to understand this as a reality of the medium, rather than being an embarrassment we try to hide from them. We're already halfway there. What is a goal, anyway? It's a state of affairs in the world that you want to exist. In other words, it's the same thing as an issue, or rather, a mirror image of one. A project can be viewed as a collection of interrelated issues, fitness variables, design concerns, and their concomitant responses. I'm exploring this method of structured argumentation for representing projects, where there's an unbroken line from business goals, to user goals, to user tasks, to system tasks, to system behaviors, to running code. Wouldn't it be interesting to create digital systems this way?

Transcribed by https://otter.ai