Charles Hannon

Professor of Computing and Information Studies

Talk

Avoiding Bias in Voice User Interfaces

10:30 AM–11:20 AM

Humans today are giving speech to robots in two ways: with machine learning algorithms that use training data to understand and generate human language; and with direct human input, as interaction designers hard-code the responses of digital assistants (Alexa, Siri, Google Home) to human utterances. Both methods are vulnerable to the introduction of bias into robot speech. The training data crawled by learning algorithms are replete with human biases, especially gender and race biases, as several high-profile examples have shown. But everyday human language also carries a variety of biases in subtle forms, many of them well known to psychologists and sociolinguists. As interaction designers dive deeper into the world of Voice User Interfaces, they need to be aware of these biases to avoid replicating them in the language we “teach” to these new devices. Ideally, to the extent that we can de-bias the speech of robots, the phenomenon of Linguistic Style Matching might, over time, cause human speech to become less biased as well.

Bio

Charles Hannon is a professor of computing and information studies at Washington & Jefferson College. He teaches courses in human-computer interaction, the history of information technology, information visualization, and project management. Charles also teaches in his college’s Gender and Women's Studies program. He writes about user experience and interaction design, educational technology, pedagogy, and, occasionally, William Faulkner. You can find his essays in interactions and Smashing Magazine and at his blog, uxappeal.com.
