Ethics as Information Architecture: Why AI Safety Requires IA Thinking
AI systems that treat ethics as an afterthought—adding content warnings or terms of service after building the core capabilities—create fragile protections easily circumvented by users or undermined by business pressures. This talk demonstrates why ethical AI development is fundamentally an information architecture problem.
Drawing from my experience building Piper Morgan (a product management AI assistant with ethics-first architecture), I’ll show how ethical principles become structural constraints on information flow. When boundary enforcement happens before AI processing, when human decision-making authority is architecturally preserved, and when systems learn from relationship metadata rather than content, you create AI that can’t be tricked into causing harm, because the architecture itself prevents it.
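To make the "boundary enforcement before AI processing" pattern concrete, here is a minimal illustrative sketch (not Piper Morgan's actual code; the action names and rule set are hypothetical): a structural gate that runs before any model invocation, so out-of-bounds requests never reach the AI at all.

```python
# Hypothetical boundary rules: actions that always require a human decision.
ACTIONS_REQUIRING_HUMAN = {"send_email", "delete_record"}

def enforce_boundary(request: dict) -> dict:
    """Check a request against structural rules BEFORE any AI processing."""
    if request.get("action") in ACTIONS_REQUIRING_HUMAN:
        return {"status": "refused", "reason": "requires human approval"}
    return {"status": "allowed"}

def call_model(request: dict) -> dict:
    # Stand-in for the actual AI call.
    return {"status": "ok", "action": request.get("action")}

def handle(request: dict) -> dict:
    decision = enforce_boundary(request)
    if decision["status"] == "refused":
        return decision          # the model is never invoked
    return call_model(request)   # only in-bounds requests reach the model
```

Because the gate sits upstream of the model, no prompt-level trickery can reach the blocked capability; the protection is architectural, not behavioral.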
This is applied IA at the systems level: using information structure to create clarity, trust, and human-centered AI. Information architects aren’t being made obsolete by AI; their expertise in organizing information spaces is essential for building AI systems that serve human flourishing rather than optimizing for engagement metrics that harm users.
Attendees will learn practical patterns for architecting ethics into AI systems and why IA thinking is desperately needed in responsible AI development.
