New Methodology for the Random Assignment of Apparent Gender and Expressivity in a Videoconference Conversation Paradigm
Sunday, May 24, 2009
2:30 PM - 2:55 PM
Yerba Buena 3 - 4
As we converse, our facial expressions, head movements, and vocal prosody carry important cues for communicating affect. For this communication to be effective, these expressions must adapt to the ever-changing context of whom we are speaking with and how they are reacting. Yet it is unclear to what extent social expectations related to gender, race, and age are driven by appearance or by the dynamics of a person's facial expressions, head movements, and vocal inflections. These variables are difficult to manipulate during a live conversation, so studies of person perception and social expectation have until now either been observational or relied on artificially scripted and acted sequences. This talk reports on a recently developed methodology that tracks a conversant in real time and redisplays a synthesized avatar that naive participants accept as live video. The system can measure how individuals adapt in real time to precise manipulations of conversational context. For instance, it can randomly assign the apparent gender or race of an interlocutor during a videoconference, or manipulate aspects of a person's perceived facial and vocal expressivity.
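The core idea — decoupling an interlocutor's motion dynamics from their appearance — can be sketched as a per-frame loop. The following is a minimal, hypothetical illustration (not the authors' implementation): every function name, parameter, and the placeholder feature set are assumptions chosen for clarity. Expression parameters are extracted from the real conversant, optionally rescaled to dampen or amplify expressivity, and re-rendered on an avatar whose apparent gender is assigned at random, independent of the tracked person.

```python
import random
from dataclasses import dataclass


@dataclass
class ExpressionParams:
    """Motion parameters tracked each frame (placeholder feature set)."""
    smile: float       # 0..1 mouth-corner displacement
    brow_raise: float  # 0..1 eyebrow elevation
    head_nod: float    # -1..1 vertical head rotation


def track(frame):
    """Stand-in for a real-time face tracker; an actual system would fit
    a deformable face model to each video frame. Returns fixed values here."""
    return ExpressionParams(smile=0.8, brow_raise=0.4, head_nod=0.2)


def manipulate(params, expressivity_gain):
    """Scale dynamic expression amplitude while leaving identity cues
    (the avatar's appearance) untouched."""
    clamp = lambda x, lo, hi: max(lo, min(hi, x))
    return ExpressionParams(
        smile=clamp(params.smile * expressivity_gain, 0.0, 1.0),
        brow_raise=clamp(params.brow_raise * expressivity_gain, 0.0, 1.0),
        head_nod=clamp(params.head_nod * expressivity_gain, -1.0, 1.0),
    )


def render_avatar(appearance, params):
    """Stand-in renderer: drives the chosen avatar with the (manipulated)
    motion parameters. Returns a text description instead of pixels."""
    return f"{appearance} avatar: smile={params.smile:.2f}"


# Randomly assign the avatar's apparent gender, independent of the conversant.
appearance = random.choice(["male", "female"])
frame = None  # would be a captured video frame
damped = manipulate(track(frame), expressivity_gain=0.5)
print(render_avatar(appearance, damped))
```

The key design point this sketch captures is that appearance and dynamics are independent channels: the random assignment touches only the rendered identity, while the expressivity gain touches only the motion, which is what allows each to be manipulated experimentally on its own.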