No Conspiracies Here! Clear and Transparent Taxonomy Foundations
Taxonomies and ontologies are foundational components in the training and refinement of machine learning models, serving both as sources for tagging machine learning training sets and as institutional references for large language models. Semantic models, however, are frequently hidden from most users because of their complexity and their necessarily strict governance. Taxonomy and ontology management systems typically offer few editing licenses, and read access, when available, is reserved for business partners who know how to navigate complicated platforms. Systems that consume taxonomy values form an abstracted layer, presenting concepts in navigational structures, as typeahead values in search boxes, or as limited flat lists for tagging content or assets. The overall semantic structure, potentially composed of many taxonomies connected by robust ontologies, is not always available for end users to view as a whole or to interact with directly. Hiding the complexity of semantic models can simplify user experiences, but it can also present fractured, contextless concept values. Taxonomies and ontologies, especially when used as foundational sources for machine learning, need to be clear, transparent, and wholly visible and available…where appropriate. In this session, hear ideas on when and where to reveal semantic models, and on how much visibility is enough to provide clear, transparent, and sensible semantic underpinnings.