Bob Kasenchak: Hi, everyone. Thanks for joining me today. I'd like to thank the conference chairs for the enormous efforts they've made to take this conference virtual. Obviously, I miss seeing all my friends and colleagues in person, and I hope that perhaps next year we can gather together once again in a physical location. But in the meantime: my name is Bob Kasenchak, and I'm a taxonomist. As a taxonomist, I'm concerned with naming and categorizing things. So I have to admit that the statement "I'm a taxonomist" isn't entirely true. Perhaps I should also say I'm an ontologist, and, more recently, a knowledge graph person.

Yes, this distinction is one I'd like to explore today, as this talk is about knowledge graphs: what they are, what they do and don't do, and how they help us to construct meaning by structuring information. Along the way I'll also cover the difference between taxonomies, ontologies, and knowledge graphs, a little bit about linked data, and the relationships between them. Ontologies and knowledge graphs are not exactly variations of taxonomies, or types of taxonomies; they are, however, within the topic of taxonomy. And one of the problems I want to discuss is exactly the difference between topics and subtypes, and how the confusion between the two is one of the reasons to consider using knowledge graphs.

So the main question I'd like to investigate is: why knowledge graphs? How do they help us construct meaning? And, importantly, what are the limits of how they help us represent knowledge? In order to do this, I am delighted to say, since I'm a taxonomist, with all the caveats that implies, that we first need to define some things: taxonomy, thesaurus, ontology, and knowledge graph. But first, we need to back up and do a little framing. The reason I want to talk about the semiotics of knowledge graphs is precisely because I'm interested in how they do and don't structure information and data, and how we do and don't derive meaning from them. Semiotics is the study of signs, including how we use signs to construct meaning. As such, it seems to me that semiotics is of great interest to information architecture as a discipline, and I'll note that I am not the first person to make this connection. Obviously, this is not the venue for a deep dive into semiotics.

What I want to point out is that the way we organize information is full of symbols and signs, including indices; further, all information representation is a kind of abstraction that's full of signifiers. That is to say, if I build a knowledge graph about, say, dogs, no actual dogs are involved. We only have signs and signification pointing towards dogs. Perhaps this seems obvious, but it's important when knowledge graphs and other information systems purport to model how humans understand knowledge. We only push around signifiers; therefore, some information and context is necessarily lost in the process of abstraction. So what I want to explore is what is possible and useful, and to ask what the limits are of using electronic knowledge representation to represent human knowledge.

If I'm honest about the genesis of this talk, there's an oft-seen diagram that purports to represent the relationship between data, information, knowledge, and wisdom. And I dislike this diagram. I think it completely misses the differences between the semiotics of interacting with different kinds of signs. How is information given meaning? How is knowledge given insight?
Why are context and insight and meaning separate, minor annotations to the structure? Does this really reflect, at all, the way we understand and practice our various disciplines? In information architecture, information environments are far more nuanced and complex than this, and human understanding even more so.

Facts are fairly well organized and represented electronically. But any kind of broad social context is much more difficult to represent, store, and access. This slide shows a fragment of an ontology about the Beatles, which has some useful information. From this model, we could expect to answer questions like: who were the members of the Beatles, or on what day was Please Please Me released, or on which album was the song Love Me Do included. What I can't do is include contextual information that would model the way we understand the social significance or influence of the Beatles. It's very hard, or perhaps impossible, to represent an answer to the question: why were the Beatles culturally significant? Perhaps we could try to point to content that discusses this topic, but it's not a question that can be answered via electronically represented knowledge in the form of some kind of voice query or easy one-sentence answer. That seems to me as ironic as possible in a field called semantics.

There are a lot of opinions about the meanings of, and relationships between, the terms taxonomy, ontology, knowledge graph, thesaurus, and linked data. A taxonomy is a hierarchically organized set of terms describing concepts or things, with defined broader and narrower relationship types. Often people, including me, say taxonomy when we mean thesaurus, as in practice most taxonomies have associative relationships and other fields containing other information about each term: alternative versions of terms, definitions, scope notes, external URI links, and so on. Really, a term can have as much information, that is, as many fields or properties containing additional information, as you'd like. Some of these fields may contain or require values from another, smaller controlled list, or some constraint such as yes or no, or a value like 5 or 12%, which starts to edge towards the knowledge graph, as it involves the combination of various controlled lists. It's also possible to link a term in a taxonomy or thesaurus to some outside resource, say a human-readable Wikipedia page or a machine-readable data source, which starts to edge towards linked data.

So it's already obvious why there's some confusion, and why some taxonomists and thesaurus specialists are interested in ontologies, linked data, graphs, and so on. Taxonomies and thesauri are great for some things and less useful for others, simply because the types of relationships between objects, between signs, are limited in scope. Specifically, it's easy to confuse or mix and match subtopics and subtypes, about which more a little later. Sorry, I lost a slide in there.

An ontology, now, is a formal structure for modeling knowledge organization systems, including taxonomies and thesauri. SKOS and OWL, for example, are schemas for modeling, storing, transmitting, publishing, and sharing vocabularies: terms, their properties, and their relationships. You can model a taxonomy or thesaurus in SKOS or OWL, but not all SKOS and OWL vocabularies are taxonomies or thesauri.
This is because SKOS and OWL permit richer descriptions of the relationships between concepts, and of their properties, than are required to model taxonomies. Unhelpfully, vocabularies modeled in this way are also commonly called ontologies. What they really are is something like ontologically modeled structures, which no one says, since I made it up. Therefore, SKOS and OWL are examples of ontologies that are models, while FIBO and other published vocabularies are examples of structures modeled in this way that are also called ontologies, but which I am recommending we call ontologically modeled structures. In an extremely recursive, unhelpful, and, I might add, ambiguous way, in the semantics industry we use the same word to refer to both the model and the thing modeled. So, for clarity, although I have little hope of influencing the world at large, I will refer to this concept as an ontologically modeled structure, or with the abbreviation OMS, for the duration of my talk.

In essence, then, a taxonomy or thesaurus is a type of lightly specified OMS, which may or may not be modeled using a formal ontology structure. Therefore, in my taxonomy, all taxonomies are OMSs, but not all OMSs are taxonomies; an OMS may contain one or more taxonomies, but it does not have to. You could certainly build an ontologically modeled system with no hierarchical relationships, like a flat authority file, a list of country names or organizations, modeled in SKOS or OWL. However, many OMSs comprise a network of interrelated taxonomies. Further, thesauri with many term properties and relationships, including things like linked data, are edging up to being ontologically modeled systems, although they're not always modeled and stored as such. All of the above are also referred to, as a class, as knowledge organization systems. Clear so far?

So: knowledge graphs. A knowledge graph is, I think, a specific type of ontologically modeled system that includes a bunch of concepts or things as labeled nodes; their specified formal relationships, that is, the edges between the nodes; and information or properties about each term, specifically including things like linked data or other kinds of links to external data resources; all stored in RDF for querying. Interesting knowledge graphs may include the automatic addition of information to your knowledge graph, whether curated by a person or via some pipeline of information, for example by pulling information from linked data sources or by regular updates pushed from another database. Importantly, knowledge graphs also feature a way to infer or reason new information not expressly stated in the graph itself, using what's called an inference engine or a reasoner. This is one definition of knowledge graphs; mark this as the critical piece.

Another key feature of knowledge graphs is that they can be connected and resolved with one another, because they're expressed in a simple and commonly understood data format: RDF. If you're thinking, well, that sounds like an ontology, well, an ontologically modeled structure, well, it is. There are, it seems to me, several schools of thought about this; there does not yet seem to be a single definition in the industry of a knowledge graph. Regardless, as a working definition, I'm going to use the bullet points I outlined on the previous slide as a working idea of what a knowledge graph is.
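Before moving on, a minimal sketch, not from the slides, of what modeling a couple of taxonomy terms in SKOS can look like, written here as a SPARQL update against a hypothetical RDF store; every name under the ex: prefix is invented for illustration.

    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX ex:   <http://example.org/taxonomy/>

    INSERT DATA {
      ex:dogs     a skos:Concept ;
                  skos:prefLabel "Dogs"@en ;
                  skos:narrower  ex:dogFood .
      ex:dogFood  a skos:Concept ;
                  skos:prefLabel "Dog food"@en ;
                  skos:broader   ex:dogs .
    }

The skos:broader and skos:narrower properties are the standard SKOS hierarchy relationships; everything else about the concepts, labels, notes, links, goes in further properties in the same way.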
So now that we have a little context, let's step back to the original proposition and talk about how graphs model knowledge, or at least model information. In a taxonomy, you have terms connected via broader and narrower relationships, like the small structure on the slide. This is an excellent structure for some purposes, like tagging content for retrieval. Taxonomies are often arranged topically, that is to say not strictly, but according to topics and subtopics. This example shows some broader/narrower relationships in a taxonomic structure nominally about dogs. Strictly speaking, this is not a good taxonomy: canine anatomy and dog food are not dogs. Therefore the implied is-a relationship that the hierarchy shows us is weak; the terms are arranged topically. Dog food is not a dog; it is a subtopic of dogs. This is, however, fine for tagging and retrieving content in some contexts, or for web navigation, as people interested in things in the general topic of dogs could find the content they're looking for about canine anatomy and dog food, for example. Many, if not most, taxonomies are built in this way, according to subtopics rather than strict taxonomic principles, as canine anatomy and dog food are subtopics of dogs. This usually doesn't cause too many problems.

However, if you want to, say, infer new information, or build training sets for artificial intelligence, or answer questions with a voice system, this loose subtopical structure is inadequate, as some of the is-a relationships implied by the hierarchy are true and others are false. Since in a knowledge graph we want to be able to make inferences or train an AI to answer questions, ontologically modeled systems offer the cure: specific labeling of the relationships between things. So instead of just implied is-a relationships, we can explicate exactly what those relationships are that are only implied by the hierarchy. Specifically, we can arrange this information into the form of triples: subjects and objects connected by predicates. You will notice that every subject can also be an object and vice versa; for example, "dogs eat dog food" can be reversed into "dog food is eaten by dogs." Predicates can also be objects, but let's leave that for another time. For now, let's note that we can use predicates that already exist in the world, through commonly used structures such as Dublin Core, schema.org, SKOS, or OWL, which makes them friendly for data sharing, or we can make up our own predicates as needed to describe our specific data.

Now we can arrange this information in a graph, using nodes and edges as shown here. The graph is then traversable, to answer queries or to infer new information, which is why it needs to be extremely carefully modeled. In this very simple example, since we know that Lassie is a Collie and a Collie is a dog, we also know, although it's not explicitly stated, that Lassie is a dog. That seems trivial, but for very large data sets it's much less trivial than this example shows. Importantly, most predicates have inverses that describe the relationship in the opposite direction. This is not always true, but let's assume it is for now.
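As a hedged sketch of how the Lassie example might look as triples, with the "Lassie is a dog" inference falling out of a SPARQL property path rather than a full reasoner; the ex: names are made up.

    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.org/>

    # Data, loaded as one update request:
    INSERT DATA {
      ex:Collie rdfs:subClassOf ex:Dog .     # a Collie is a kind of dog
      ex:Lassie rdf:type        ex:Collie .  # Lassie is a Collie
    }

    # Then, as a separate query: find everything that is a dog,
    # including things only implied by the class hierarchy.
    SELECT ?dog
    WHERE {
      ?dog rdf:type/rdfs:subClassOf* ex:Dog .
    }

The query walks any number of subclass steps, so Lassie is returned as a dog even though that triple was never asserted.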
There are definitely those who would call this a knowledge graph; I think something is still missing: a connection to some kind of external data source. Although some definitions of knowledge graphs are already satisfied, most require the inclusion of some kind of linked data. This could be, and probably most often is, the assertion of links of some kind or another to other graphs, ontologies, or data sources, but it could also include links to content or other information. In our simple example here, we could link the concept dog food to a shopping website, Lassie to a YouTube search for Lassie videos, canine anatomy to some repository of scholarly papers on that subject, and, probably most likely, dogs to the corresponding DBpedia or Wikidata page about dogs, all by asserting URI links. This last bit is crucial because it demonstrates the power of linked data and knowledge graphs, which allows us, once the link is established, to connect to, and then include if we like, other information about the topic from that source, pulled into our own graph.

Here's the first part of the DBpedia page about dogs, which includes an abstract, as you can see. Since DBpedia is generated partially from structured data from Wikipedia, it usually incorporates the first paragraph or two from the Wikipedia page corresponding to the topic, a link to a picture, and some external URL links to other resources about dogs further down the page. If we were to see the whole thing, we could find other information about species, breeds, publications, journals, and a host of other information, including the Wikidata equivalent topic, from which we could find even more URI resources. Any of this information, and this is the critical bit, can be queried and included in our own graph. The reason this is crucial: since graphs are expressed in a commonly understood and shareable format, we can link them together via equivalencies and other relationships. This should, I hope, further reinforce the need to use very, very careful and specific predicates to combine the information in our graphs, since when we link data sets we want to be able to make valid inferences, answer questions correctly, deliver relevant content, train our AI or voice assistant, or correctly do whatever task it is we're building our knowledge graph for.

The last important bit about knowledge graphs is that you need some way for people and systems to interact with your graph. These interactions take essentially two forms: allowing people or systems to query your graph to get information, and letting people or systems add to, edit, delete, or otherwise change information in your graph. Therefore, there are four modes of interaction to consider: users who can read, users who can write, systems that can read, and systems that can write. If you want people to be able to access your graph, you need to provide some mechanism or interface. This could be as simple as exposing a SPARQL endpoint that users can query. SPARQL stands for SPARQL Protocol and RDF Query Language, and it's a fairly simple query language used to interact with RDF graph databases. If you expose a SPARQL endpoint, your users can query your graph, if they know SPARQL. Obviously, this is necessarily limiting; not everyone is going to learn a query language. If your audience for consuming this information is limited to people comfortable with using a query language, that's fine. The simplest of all queries, shown here, selects all subjects, predicates, and objects in the graph to display as triples.
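That simplest of queries, returning everything in the graph as triples, looks something like this:

    SELECT ?subject ?predicate ?object
    WHERE {
      ?subject ?predicate ?object .
    }

Every row returned is one triple; against a real endpoint you would usually add a LIMIT so you don't ask for the whole graph at once.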
Now, SPARQL is fairly simple to learn, and if you're so inclined, I can recommend the O'Reilly book. But not everyone, obviously, is going to learn SPARQL, so we need other mechanisms for other people to be able to access our graph. If you want your information to reach a broader audience, say you have a graph to keep track of information about movies or something that you want to be generally available, it might make sense to offer some other kind of user interface. This can be as simple as a search box, or as complicated as some kind of visual interface. The genius of the Google Knowledge Graph, of course, is that you can access it by accident, simply by using Google. In addition to getting traditional Google results in the form of a list of relevant websites on the left side, searches also ping the Google Knowledge Graph, which offers additional information in the sidebar on the right side, as we've all seen. I think there are many other ways of offering interaction with graphs without requiring the user to know SPARQL, and I think interesting solutions, like query wizards that allow the user to perform SPARQL queries without writing SPARQL, and graphical and other kinds of interfaces, are on the way. This is a rich topic that deserves more investigation.

Systems also need access to your graph. Depending on your use case, you might want to make the information in your graph available for querying by other systems. Since RDF is an accepted standard for data storage and transmission, other systems can query your graph to discover information. This could be as simple as using a linked data URI to extract information. If I want to know Sean Connery's birthday, I can write a query to get that from DBpedia. If I have a list of all actors who have ever, say, played a villain in a Bond movie, I can also write a script to get all of their birthdays and other information.

In an alternative scenario, let's say I keep a record of logged-in visitors to my website and record what content they read. Maybe I also have metadata about which topics they often read, and which of my authors write on those topics. If I store all this information as triples in my graph database, it might look something like this: visitor 123 viewed this content, the content has a topic, the paper has an author, and so forth. This information can also be viewed as a graph; this is the same information presented as a graph. Perhaps I have an external system that periodically, daily or hourly or whatever, gathers this information for analysis in another application. Such a program could query not only the literal information stored in my graph, which paper did visitor 123 read, but, since we're dealing with RDF and SPARQL, also query inferred information: which topics did visitor 123 most read about, by connecting the dots in my triples, that is to say, by querying information from the graph, including inferencing, and counting things.
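As a hedged sketch of that kind of connect-the-dots query, counting which topics a given visitor reads most; everything under the ex: prefix stands in for whatever schema the graph actually uses.

    PREFIX ex: <http://example.org/>

    # Which topics has visitor 123 read about most often?
    SELECT ?topic (COUNT(?content) AS ?views)
    WHERE {
      ex:visitor123 ex:viewed   ?content .
      ?content      ex:hasTopic ?topic .
    }
    GROUP BY ?topic
    ORDER BY DESC(?views)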
In addition to letting users and systems access your graph, you need to have automated and/or manual processes to update your graph. Depending on what your knowledge graph represents, it could already be outdated. This is probably not true if your graph describes something like geological areas, which are relatively stable, but if you're tracking, say, which customers look at what products or content, or which items you've sold and what categories they're in, or who's publishing papers today, or any number of similar uses that require real-time data updates, you need to continually add new data to your graph. In either of the scenarios in the previous section, you may want to offer the same types of access points, that is, a graphical endpoint or a SPARQL endpoint, not only for a person but also for a system, to be able to add, delete, or change the data that's in your graph: updating records when Bond villains die, well, the actors who played Bond villains anyway, or adding information about which customers viewed content on which topics, written by whom. In either case, the data has to be gathered somehow, converted to triples if it's not already in RDF format, which obviously must conform to the schema of your ontology, and then added to the graph via some kind of pipeline. This is essentially the reverse of the process described above, as SPARQL can also be used to insert information into a graph as well as query and read things out of it. So data that's not in RDF needs to be converted to RDF for insertion.

Again, users who know SPARQL and have the proper access privileges can write information directly to your graph database. But for other users to do the same, you need an interface, (a) to offer an easy way to interact with the data and (b) to constrain the kinds of information that can be entered. This second bit, constraint, is key, and I think somewhat under-discussed. When a human, not a system, wants to add, remove, edit, or otherwise change information in the graph, the most logical tool is some kind of ontology management tool or other vocabulary management tool that is natively RDF on the back end and is also amenable to linked data and other kinds of data-linking capabilities. Typically, ontology management tools offer a graphical interface for constructing, maintaining, and publishing RDF-based ontologies, via export or API to other systems, without writing directly in RDF triples, SKOS or OWL, URIs, or SPARQL. That is to say, that work is done on the back end, perhaps during schema development; when the user is interacting with the graph, they do it in a graphical manner in which you don't have to think in triples or SPARQL. Those connections are made by the system. With the obvious caveat that I'm biased, as Synaptica publishes such a tool; I will also note that I'm allowed to take screenshots of my own tool, and that's why I've included them here. The interface here is designed specifically to be friendly to taxonomists and other information professionals who are good with UIs and understand the structure of information, but don't necessarily want to write in SPARQL or OWL.

So, another word on constraints. Triple stores are, on the whole, very permissive: if you know SPARQL, you can add any information you want to the graph database. One of the reasons to use an interface on top of the graph database is to constrain the user to adding information only when it can be validated using some kind of schema or business rules. Constraints can include, for example, a set of commonly used predicates and classes, as well as the capacity to invent, assign URIs to, and publish custom classes and relationships as needed, within valid RDF frameworks.
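As a hedged sketch of the kind of SPARQL update just described, recording that an actor who once played a Bond villain has died; the ex: names and the date are invented, and in practice the hard part is getting the incoming data into triples that fit your schema at all.

    PREFIX ex:  <http://example.org/>
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

    # Replace the "living" assertion with a death record:
    DELETE { ex:actor42 ex:livingStatus ex:Living . }
    INSERT { ex:actor42 ex:livingStatus ex:Deceased ;
                        ex:deathDate    "2021-01-01"^^xsd:date . }
    WHERE  { ex:actor42 ex:livingStatus ex:Living . }

An interface layered over the endpoint would typically generate updates like this behind the scenes, which is exactly where the constraints come in.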
Again, I think the topic of constraint is a little under-discussed and merits further discussion, but we don't have time to go much further into it now. Lastly, on the subject of tools, and very briefly: such tools often offer various options for visualization of your graph. These vary from simple to complex, and from static to functional or editable. The image below is from Ontotext's GraphDB product.

So now that we've established some definitions and some framing around the topic, let's talk about representing knowledge, or information, in graphs. Graphs are useful and interesting exactly because they're shareable, connectable, extensible, and customizable. One of the things that I haven't emphasized enough, one of the things that makes graphs so interesting and useful, and one of the things I hope you'll take away today, is that in graphs, the relationships between things are first-class citizens. I've heard this formulation several times lately and it rings true, so I'm amplifying it here. Relationships are first-class citizens: not just the nodes, but the relationships between objects. Graphs allow for flexibility and nuance in modeling, up to a certain point, because of the expressivity of the relationships. It seems like lately I've been hearing things, to go back to my little pyramid diagram, like "we're building a graph to model the sum of all human knowledge," which brings me back to this diagram that I dislike. While graphs allow for some nuance, especially compared to other kinds of data structures, there are limits to what we can represent, which is why I think the topic is tangled up in semiotics.

Let's take a look first at what graphs are good at. I clearly can't go through every use of graphs, so I'll highlight a few. One thing to keep in mind is that the graphs I've been showing as samples on these slides are necessarily quite small, so that you can see them on a slide; real knowledge graphs in the wild contain thousands, millions, or even billions of nodes and edges. I watched a presentation recently by someone who said that his graph has 40 trillion nodes and relationships.

One of the things that knowledge graphs are very good for is voice assistants. Siri and similar technologies are largely powered by knowledge graphs, along with other technologies, of course. When you ask your voice assistant a question, it translates your natural-language query into a graph query and then provides the most relevant results from the graph, as nearly as it can. I'll come back to this example later, as I think it highlights some of the limitations as well. Another is enhanced internet search, as we already looked at with the Google Knowledge Graph. Results from the Google Knowledge Graph are displayed next to, quote, traditional web search results, to enhance traditional website-based search results with additional information. The genius of this, as described, is that you don't have to do anything to access the graph; the graph is automatically accessed for you to provide links to relevant web pages, basic facts, images, and other information about the topic you are asking about, not just website results. How often, when you do a Google search, do you end up accessing the information in the Google Knowledge Graph on the right-hand side instead of the links? I know I do it all the time. Graphs are also very useful for AI training sets.
Images or content that have already been tagged with relevant metadata can be expressed in a graph as having those tags. Expressing them in this way allows AI algorithms to traverse the graph to infer tags for similar content: in the case of documents, that is content that has similar words; in the case of images it's more complicated, you have to do image recognition, but you can try to infer that similar images should also have similar metadata tags. This is great for document tagging; I've heard talks about this for the quick classification of large numbers of X-rays given a training set, and other uses in the medical space. It allows you to try to infer metadata for large datasets even if your training data is incomplete or low quality; it's able to get a lot of signal out of the noise.

In business intelligence, graphs can be critical because they're being used for business intelligence. Sorry, let me start that over. This piece is critical because the way graphs are being used for business intelligence points towards other applications, which is largely the combination and integration of disparate data sets. Oftentimes, in large enterprises, data is stored all over the place, in hundreds of databases with thousands upon thousands of tables, some on your enterprise servers and some on people's local machines in Excel spreadsheets and other data silos. Resolving these datasets via integration using a graph forces the standardization of nomenclature and allows the data to be shared. A good approach, I think, to starting this kind of monumental task is to have specific business questions that you want to answer, how many orders did we ship today?, and then build the graph around use cases, which can be considered complete when the graph is able to answer that question; then you can add to it by trying to answer more questions and gradually expanding the scope.

And lastly, for now, graphs are finding lots of uses in medical applications. This is an area in which well-formed and widely used ontologies have been around for a long time, MeSH, SNOMED, and so forth. So it makes sense to try to classify X-ray images as I described, or to classify or read doctors' reports, aggregate research data sets, deliver content, and model drug interactions, for example; it's a nice case for graph modeling.

Finally, and I hope this is what I've been leading up to, I want to talk about the limits of what graphs can do. Like any concept that generates lots of buzz, graphs can be made to seem like a silver bullet. But graphs are only as good as the information they contain, and specifically, I think we have to be very careful about the power and limitations of predicates. Like any ontological structure, we must take care not to assume that everything is in the graph, and we must understand the limitations of predicates. The moral of the story, to give away the ending, is that we have to be very, very careful when we model information. Some of this can be summed up by understanding the difference between the open- and closed-world assumptions. The closed-world assumption assumes that anything not explicitly stated is false. You find this often in structures like relational databases: if I have a list of customers and I ask whether a customer is in the list, and it's not found there, the expectation is that the answer is no. There is no expectation that there are customers that are not in the list. That's a very closed-world assumption.
To use this example, we can ask a question of a database. Say we have a fact in our database: Bob is a citizen of the United States. We ask the question: is Bob a citizen of France? Since we don't have any data on this, the assumed answer is no. Although in reality I could have dual citizenship somewhere, I don't, but this is just an example. This is, by and large, how traditional types of databases are constructed and understood to constrain information. The open-world assumption, which is how ontologies and knowledge graphs work, has to assume that there are things we don't know. We express the things that we do know in our data, but we can't assume that the data set contains everything. So we ask the question: is Bob a citizen of France? Just because we don't have anything that tells us he is, we also don't have anything that tells us he's not, so the answer is: we don't know. Ontologies and graphs must be constructed using this assumption. This is one of the hardest leaps to make when you're going from a taxonomic-thinking world to an ontological world.

So good graphs, from which we can derive useful information, are carefully constructed and well modeled, with good predicates and carefully modeled information; the fuzzier the predicates, I think, the less useful the graph. Here's an example of a fine model. We can extract structured data from scholarly publications, as it's mostly in XML, and model it in this way: this paper was written by these authors, who are each affiliated with an institution, and they wrote a paper on some topic that was published in a journal. The predicates are clear, there's no ambiguity, and all the information about the paper is contained in the graph. We can further extrapolate that these authors are co-authors and that they publish on this topic. That's all fair enough and not stretching the boundaries at all. We can then construct a network of such papers, sort of like a social network, that connects scholarly authors by topic, co-author, institutional affiliation, and journal published in, and we can see those connections. Since the predicates involved are concrete, author of a paper, paper was published in a journal, paper has a topic according to some taxonomy, we can derive useful information, asking questions like: who published papers on this topic, in some journal, in the last year, with more than four authors, or who is affiliated with Harvard? We can also derive second-order information, like a list of authors at the same institution who have similar research areas but have never co-authored a paper. As more information is gathered, it can be added to the graph. We can also connect the nodes in the graph to external sources: the node about Harvard could be connected to the Harvard website, or their Wikipedia page, or a DBpedia linked data page with more information; authors to their personal websites; topics to websites about the topic; and so on. This is pretty tidy, I think. The predicates are unambiguous and, assuming the data is correct, there's really no dispute about who wrote a paper or in which journal it appeared.
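As a hedged sketch of the kind of question such a scholarly graph could answer, papers on a given topic with more than four authors, where all of the ex: names are invented stand-ins for whatever schema the graph actually uses:

    PREFIX ex: <http://example.org/scholarly/>

    # Papers on a given topic with more than four authors:
    SELECT ?paper (COUNT(?author) AS ?authorCount)
    WHERE {
      ?paper ex:hasTopic  ex:Taxonomy ;
             ex:hasAuthor ?author .
    }
    GROUP BY ?paper
    HAVING (COUNT(?author) > 4)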
On the other hand, to use a specious example, the notion of some artist or band being influenced by another is far more vague, and I see graphs like this all the time and they kind of drive me crazy. I can easily make the assertion that the Rolling Stones were influenced by Muddy Waters, which seems true and is fair enough. I could even construct a list of other bands that Muddy Waters influenced, or that influenced the Rolling Stones, or The Who, or whom the Stones in turn influenced. But this information is not very well modelable. What constitutes influence? What does it mean to say that one artist influences another? Does it mean that was their main influence, or a strong influence, or a primary influence? Are all influences equally strong? How would one come up with a comprehensive list of bands influenced by Muddy Waters or the Rolling Stones? Those lists would necessarily be huge. We can see the open-world assumption in operation here; that, combined with the well-meaning but vague predicate "influenced by," means we can construct a crummy-looking graph that in actuality is incomplete at best and misleading at worst. I don't think it's actually possible to construct a useful graph about musical influence. The topic is vague, the world is open, and the relationships described by the predicates are muddy. Perhaps this is a facile example, but I wanted to try to probe the limits of the descriptive quality of predicates when they're not concrete. And the cautionary tale, of course, is that much human knowledge is not modelable in a concrete way. I think it's far too late, and not nearly buzzwordy enough, to advocate for changing the name to information graphs, as that ship has sailed. But knowledge as such really is very difficult to represent, because I can make a graph about failed attempts to occupy Afghanistan, but I can't stop you from thinking it's a good idea to get involved in a land war in Asia. Thanks, everyone. Please look me up if you want to talk graphs, taxonomies, sandwiches, or anything else.

Transcribed by https://otter.ai