Survivor Study

General Background and Objectives

The purpose of this study is to examine how people coordinate bodily and linguistic information during collaborative and competitive task-oriented interactions. The focus will be on multimodal channels of behavior, including body posture, head nods, gesture, prosody, semantics, syntactic structure, and speech acts. By collecting a large, data-intensive repository of such information, our objective is to examine how social partners continually adjust to and influence each other's behaviors and thoughts. We also intend to use machine learning and data mining techniques to automatically classify a range of interaction goals, such as those involved in establishing rapport or negotiating with the intent of maximizing one's own success. By identifying these patterns of behavior, we can begin answering questions about the purpose of behavioral coordination. This study is primarily exploratory, with the explicit intent of allowing open-ended analysis.
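As a purely illustrative sketch (not part of the study protocol), the supervised classification of interaction goals described above might take the following form. The feature set, data shapes, and labels are placeholders; the actual features would come from the multimodal channels listed above.

```python
# Hypothetical sketch: classifying interaction goals (e.g., rapport-building vs.
# competitive negotiation) from per-window multimodal features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per analysis window, with columns such as
# head-nod rate, gesture rate, mean pitch, and speech-act counts.
X = rng.normal(size=(200, 8))
# Placeholder labels: 0 = rapport-building, 1 = competitive negotiation.
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```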

Previous research has shown that people come to align their bodily and linguistic behaviors during conversation. Such alignment suggests that people are highly attuned to one another's behaviors on a moment-by-moment basis. However, the purpose of such spontaneous coordination is unclear. Its interpretation differs across fields and disciplines: social psychologists suggest that coordination is a means of establishing rapport (Baumeister & Leary, 1995), language researchers claim that its function is to facilitate language processing (Bock, 1986), neuroscientists argue that it is rooted in a mirror neuron system that codes for imitation (Gallese et al., 2004), and those taking a dynamical systems perspective argue that coordination is evidence of synergistic motor control (Riley et al., 2011).

These differing views stem from research that typically examines only one channel of behavior at a time, rather than multiple channels simultaneously. It may be that particular actions serve unique functional purposes within an interaction, and that these functions may or may not be modulated by the goals of the task, the roles of the interlocutors, and the level of contact between them. Further research is needed that systematically varies these interaction features, with particular emphasis on a “data-driven” approach (Yu et al., 2010). This approach uses technologies that capture fine-grained multimodal data in real time, together with data mining techniques that extract meaningful patterns, as in the sketch below. The ultimate goal is to develop a new understanding of how people coordinate joint actions during conversation and other instances of language use.
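As another illustrative sketch, one common way to extract coordination patterns from continuous behavioral signals is a lagged cross-correlation between the two partners' time series. The signals, sampling assumptions, and lag range below are placeholders rather than the study's actual analysis pipeline.

```python
# Hypothetical sketch: quantifying moment-to-moment coordination between two
# interlocutors by correlating their behavioral time series at a range of lags.
import numpy as np

def lagged_correlations(a, b, max_lag):
    """Pearson correlation between series a and b at lags -max_lag..+max_lag."""
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            x, y = a[:lag], b[-lag:]
        elif lag > 0:
            x, y = a[lag:], b[:-lag]
        else:
            x, y = a, b
        corrs[lag] = np.corrcoef(x, y)[0, 1]
    return corrs

# Placeholder signals: e.g., frame-by-frame head-movement speed for each partner.
rng = np.random.default_rng(1)
partner_a = rng.normal(size=1000)
# Partner B loosely follows partner A with a 10-frame delay, plus noise.
partner_b = np.roll(partner_a, 10) + rng.normal(scale=0.5, size=1000)

corrs = lagged_correlations(partner_a, partner_b, max_lag=30)
peak_lag = max(corrs, key=corrs.get)
print(f"Peak correlation {corrs[peak_lag]:.2f} at lag {peak_lag} frames")
```

The lag at which correlation peaks indicates which partner tends to lead and by how much, one simple way of describing coordination across a channel of behavior.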