Description
We use the word “conversational” to characterize virtual agent interfaces, but are we really having a conversation with the agents behind them? Not quite, but these interactions can feel enough like a conversation that our expectations are engaged in ways that they wouldn’t be if we were navigating a menu.
In this talk, I’ll cover several challenges in assessing the effectiveness and usability of virtual agent systems. I’ll also discuss some of the cognitive processes underlying communication, and show how an understanding of the assumptions we make in conversation can serve as a framework for assessing conversational systems.
Key takeaways from the session:
People will come away with considerations for assessing the usability of an intelligent agent or chatbot system that they can put into practice in their own projects, along with an understanding of the cognitive processes underlying communication that provides a framework for evaluating conversational systems.
About the speaker
Sharon is a UX researcher and designer at Intel, where she consults with internal teams on taxonomy, ontology, and content. Her research has focused on employee information-finding strategies, and she has worked with Intel IT on enterprise-scale information systems such as enterprise search and intelligent virtual assistants.
Previously, she was the ontology modeling lead at a semantic web start-up.
Her favorite category is rdf:Property.