New Methodology for the Random Assignment of Apparent Gender and Expressivity in a Videoconference Conversation Paradigm
Steven M. Boker
University of Virginia
As we converse, we produce facial expressions, head movements, and vocal prosody that are important cues to the communication of affect. For this communication to be effective, these expressions must adapt to the ever-changing context of whom we are speaking with and how they are reacting. But it is unclear to what extent social expectations related to gender, race, and age are driven by appearance or by the dynamics of a person's facial expressions, head movements, and vocal inflections. These variables are difficult to manipulate during a live conversation, and until now studies of person perception and social expectation have either been observational or have relied on artificially scripted and acted sequences. This talk reports on a recently developed methodology that tracks a conversant in real time and redisplays a synthesized avatar that naive participants accept as video. The system can measure how individuals adapt in real time to precise manipulations of conversational context. For instance, the system can randomly assign the apparent gender and race of an interlocutor during a videoconference, or manipulate aspects of a person's perceived facial and vocal expressivity.
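The random-assignment component of such a paradigm might be sketched as follows. This is a minimal illustration only: the condition labels, expressivity gain values, and function names are assumptions for the sketch, not details of the actual system described in the talk.

```python
import random
from dataclasses import dataclass

# Illustrative factor levels (assumed, not taken from the talk):
# apparent gender of the avatar, and a gain applied to tracked
# facial/head motion to attenuate or amplify expressivity.
APPARENT_GENDERS = ["male", "female"]
EXPRESSIVITY_GAINS = [0.5, 1.0, 1.5]  # attenuated, veridical, amplified

@dataclass(frozen=True)
class Condition:
    apparent_gender: str
    expressivity_gain: float

def assign_conditions(n_dyads: int, seed: int = 42) -> list[Condition]:
    """Randomly assign each dyad one cell of the 2 x 3 design,
    keeping cell counts as balanced as n_dyads allows."""
    cells = [Condition(g, e)
             for g in APPARENT_GENDERS
             for e in EXPRESSIVITY_GAINS]
    # Repeat the full design enough times to cover all dyads,
    # truncate, then shuffle so assignment order is random.
    schedule = (cells * (n_dyads // len(cells) + 1))[:n_dyads]
    rng = random.Random(seed)  # seeded for a reproducible schedule
    rng.shuffle(schedule)
    return schedule

conditions = assign_conditions(12)
```

With 12 dyads and 6 design cells, each cell is assigned exactly twice; the seeded generator makes the schedule reproducible across sessions.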
2010 Program Committee
Tyler S. Lorig, Washington and Lee University (Chair); Nalini Ambady, Tufts University; Abigail Baird, Vassar College; Sian Beilock, University of Chicago; Daniel Klein, Stony Brook University, The State University of New York; Richard Lewis, Pomona College; Kris Preacher, University of Kansas; Deidra Schleicher, Purdue University; Timothy Strauman, Duke University; Tracy Zinn, James Madison University