
Recognizing affiliation in colaughter and cospeech

Article published in Royal Society Open Science

2020.10.08 | Anne-Mette Pedersen


Theories of vocal signalling in humans typically consider only communication within the interactive group and ignore intergroup dynamics. Recent work has found that colaughter generated between pairs of people in conversation can afford accurate judgements of affiliation across widely disparate cultures, and that the acoustic features listeners use to make these judgements are linked to speaker arousal. But to what extent does colaughter inform third-party listeners beyond other dynamic information between interlocutors, such as overlapping talk? We presented listeners with short segments (1–3 s) of colaughter and simultaneous speech (i.e. cospeech) taken from natural conversations between established friends and newly acquainted strangers. Participants judged whether the pairs of interactants in the segments were friends or strangers. Colaughter afforded more accurate judgements of affiliation than cospeech did, despite cospeech segments being, on average, more than twice as long as colaughter segments. Sped-up versions of colaughter and cospeech (proxies of speaker arousal) did not improve accuracy in identifying either friends or strangers, but faster versions of both modes increased the likelihood of tokens being judged as occurring between friends. Overall, the results are consistent with research showing that laughter is well suited to transmitting rich information about social relationships to third-party overhearers: a signal that works between, and not just within, conversational groups.



Gregory A. Bryant, Christine S. Wang and Riccardo Fusaroli (2020): Recognizing affiliation in colaughter and cospeech. Royal Society Open Science, Volume 7, Issue 10.



Riccardo Fusaroli, Associate Professor
School of Communication and Culture - Cognitive Science