

Round 1 Spring 2023

Can an ecological fear reaction reduce chronic low-grade inflammation?

  • Louise Bønnelykke-Behrndtz, Department of Clinical Medicine, Aarhus University
  • Seednumber: 26257
  • Collaborators:
    Marc Malmdorf Andersen, Interacting Minds Centre, Aarhus University
    Josephine Benckendorff, Department of Plastic- and breast surgery, Aarhus University Hospital
    Mathias Clasen, Department of English, Aarhus University


A fear reaction is fundamental to human survival, designed for escaping danger. This natural fear response is associated with healthy, transient activation of the immune system, providing effective defences against potential trauma and pathogens. If this immune activation fails to resolve, it results in continuous low-grade inflammation, which is present in around 10% of otherwise healthy individuals and is associated with the risk of several diseases.

In this study, we aim to investigate whether an ecological fear reaction can provide non-medical immune modulation and resolution of low-grade inflammation, recruiting volunteers who sign up for the Dystopia Haunted House event in 2023. Participants will have markers of fear and inflammation estimated at baseline, on-site, and post-event, providing a better understanding of the dynamics and interaction between the adrenergic and immune systems.

A New Window into Infant Vocal Development: An Exploratory Ultrasound Study

  • Christopher Cox, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University
  • Seednumber: 26258
  • Collaborators:
    Catherine Laing, Department of Language & Linguistic Science, University of York
    Margherita Belia, Department of Language & Linguistic Science, University of York
    Florence Oxley, Department of Education, University of York
    Tamar Keren-Portnoy, Department of Language & Linguistic Science, University of York
    Amelia Gully, Language & Linguistic Science, University of York
    Sam Cobb, Department of Archaeology & Hull York Medical School


The first year of infants’ lives is characterised by the emergence of stable patterns of articulatory activity (i.e., babble), which is a critical step in the early stages of language development. So far, our understanding of early infant vocal development has been primarily based on the study of auditory and acoustic signals (e.g., Vihman, Ferguson & Elbert, 1986; Oller et al. 2019). However, due to the high degree of individual variability in human vocal tract anatomy and articulation (Vorperian et al., 2009), auditory and acoustic analyses offer a limited understanding of how infants acquire the ability to produce speech sounds. Non-invasive imaging techniques, which can provide inside views of infants’ oral tracts during vocal production, can thus offer crucial insights into this complex feat of articulatory coordination.

The current project accordingly seeks to explore the following research questions: How effective are ultrasound methods in studying the growth and development of infants' anatomy and vocal abilities? How do anatomical and articulatory developments change the characteristics of infant vocalisations? What are the most rigorous testing procedures to use with infants in an ultrasound setup?

A scalable and explainable approach to discriminating between human and artificially-generated text

  • Roberta Rocca, Interacting Minds Centre, Aarhus University
  • Seednumber: 26259
  • Collaborators:
    Ross Deans Kristensen-McLachlan, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University; Center for Humanities Computing, Aarhus University
    Yuri Bizzoni, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University; Center for Humanities Computing, Aarhus University
    Rebekah Baglini, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University; Center for Humanities Computing, Aarhus University


With natural language generation models becoming increasingly fluent, the ability to discriminate between human and artificially-generated text has become an urgent societal problem. However, existing approaches are inaccurate, non-scalable, and uninterpretable, which makes them practically unusable in real-world contexts (e.g., detection of AI-generated essays) that require precision and accountability. We propose a novel and scalable approach to training text discrimination models based on interpretable linguistic and cognitive features. Using prompts from standard NLP benchmarks for paraphrase, dialogue, and summarization, we generate parallel corpora of human- and machine-generated text, train interpretable classifiers on linguistic and cognitive descriptors, and combine insights from the resulting models with experimental evidence to highlight overlaps and differences in computational and human heuristics for text discrimination.
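The feature-based idea can be sketched in miniature. The sketch below is illustrative only: the feature set (type-token ratio, mean word length, mean sentence length) and the two sample texts are placeholders, not the project's actual linguistic and cognitive descriptors or data.

```python
import re
from statistics import mean

def features(text):
    """Extract a few interpretable stylometric features (illustrative only)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "type_token_ratio": len(set(words)) / len(words),
        "mean_word_len": mean(len(w) for w in words),
        "mean_sent_len": len(words) / len(sents),
    }

# Toy examples: repetitive machine-like text tends to reuse word types.
human = "Honestly? I dunno. The essay rambles, loops back, contradicts itself."
machine = ("The essay presents a coherent argument. The essay develops the "
           "argument systematically. The essay concludes the argument clearly.")

for label, text in [("human", human), ("machine", machine)]:
    print(label, features(text))
```

A classifier trained on such descriptors stays interpretable because each feature's contribution to a decision can be read off directly, in contrast to end-to-end neural detectors.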

Automatic neural machine translation for Greenlandic

  • Ross Deans Kristensen-McLachlan, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University; Center for Humanities Computing, Aarhus University
  • Seednumber: 26260
  • Collaborators:
    Kenneth Christian Enevoldsen, Center for Humanities Computing, Aarhus University
    Johanne Sofie Krog Nedergård, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University


What is a word? This seems like a simple question, but it continues to stump linguists and philosophers, who spill millions of words trying to explain what they’re spilling. It seems, too, like it should be a concern for people who create language technology. How can we teach computers to use words if we don’t even know what they are?

Contemporary natural language processing (NLP) makes assumptions about words which, by and large, are based on how major Indo-European languages behave. Those same linguists and philosophers might baulk, but the engineer can reply with empirical results demonstrating the efficacy of their systems on goal-oriented language tasks. If it works, it works. But does it actually work?

This project tests these assumptions by applying modern language technology to a lesser-studied part of Denmark’s linguistic landscape – Greenlandic. This fascinating language of some 57,000 speakers exhibits many rare linguistic phenomena such as ergative alignment and polysynthetic morphology. Our goal is to train an automatic machine translation model for Greenlandic to Danish and back again. In doing so, we’ll empirically evaluate how well the assumptions of NLP hold up when applied to an extremely low-resource and morphologically complex language like Greenlandic.
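One standard NLP workaround for the "what is a word" problem is to learn subword units directly from data, for example with byte-pair encoding (BPE), which repeatedly merges the most frequent adjacent symbol pair. The minimal sketch below is illustrative only, not the project's pipeline; the toy corpus uses a few approximate Greenlandic forms that share morphological material.

```python
from collections import Counter

def learn_bpe(corpus, num_merges):
    """Learn byte-pair-encoding merges from a word-frequency dictionary."""
    # Represent each word as a tuple of symbols (single characters to start).
    vocab = Counter()
    for word, freq in corpus.items():
        vocab[tuple(word)] += freq
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Replace every occurrence of the best pair with the merged symbol.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges, vocab

# Toy corpus: a longer polysynthetic form shares material with shorter ones.
corpus = {"illu": 5, "illorsuaq": 3, "illorsuarmi": 2}
merges, vocab = learn_bpe(corpus, 6)
print(merges)
```

Whether such frequency-driven merges line up with genuine Greenlandic morpheme boundaries is exactly the kind of assumption this project puts to an empirical test.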

Collaborating with Large Language Models: Prompting and the Future of Computational Thinking

  • Rebekah Baglini, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University; Center for Humanities Computing, Aarhus University
  • Seednumber: 26261
  • Collaborators:
    Arthur Hjorth, Department of Management, Aarhus University
    Ross Deans Kristensen-McLachlan, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University; Center for Humanities Computing, Aarhus University
    Mads Rosendahl Thomsen, Comparative Literature, Aarhus University
    Morten H. Christiansen, Department of Psychology, Cornell University
    Joseph Dumit, Department of Anthropology Sociocultural Wing, UC Davis


This project will investigate prompt engineering around large language models (LLMs) and the skills required to execute the prompt engineering process. The project will focus on designing a web interface, creating LLM challenges, and recruiting ML experts and novices as participants for observational and think-aloud protocol data collection. The data will be analyzed to identify prompt engineering process components, debugging approaches, perceived difficulties, and prior knowledge used to make sense of the process. The outcomes of the project will form the foundation for larger research instruments and grant applications. The interdisciplinary nature of the project draws on expertise from several different disciplines, including natural language processing, machine learning/engineering, learning sciences and education, and linguistics.

Discriminating eyes: Exploring selection biases in visual processing of resumes

  • Caroline Kjær Børsting, Department of Management, Aarhus University
  • Seednumber: 26262
  • Collaborators:
    Sonja Perkovic, Department of Management, Aarhus University
    Anders Ryom Villadsen, Department of Management, Aarhus University
    Dianna Amasino, Faculty of Economics and Business, University of Amsterdam


Discrimination in hiring can have detrimental consequences for underrepresented groups’ access to employment, but the mechanisms behind such discrimination are not well understood. Previous research has not been able to provide a clear account of how screeners visually process resume information, or whether this differs across candidate attributes and screener motivations. Research on selective attention suggests that people are skilled at navigating their visual environment and avoiding information that conflicts with their values or beliefs, provided the appearance of that information is predictable. Building on this, we will investigate how screeners visually process resumes when resume appearance is predictable vs. unpredictable, and whether this leads to more diverse hires. We will test this in the lab using eye-tracking and a more naturalistic resume construction. After assessing 100 fictitious resumes for an entry-level clerk position, screeners will complete a questionnaire to assess 1) how their gaze patterns match current beliefs about what a good candidate is, and 2) whether they engage in injurious behavior without realizing it. The proposed experimental design allows for insights into implicit biases elicited in screeners’ processing of resumes, and a comparison between screeners’ visual biases, their final assessment of candidates, and their self-awareness of their own biases. Understanding the underlying foundations of discrimination is expected to inform efforts to reduce discrimination in hiring.

The taste of cooperation – Disentangling bottom-up versus top-down influences of shared food experience on social affiliation

  • Anna Zamm, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University
  • Seednumber: 26263
  • Collaborators:
    Qian Janice Wang, Department of Food Science, University of Copenhagen


Sharing food is a culturally universal bonding experience. Emerging evidence suggests that eating the same food, or even sharing from the same plate, can promote trust and cooperation between strangers. However, the sensory and cognitive mechanisms by which food sharing facilitates social affiliation are still unknown. The present project aims to disentangle sensory (shared food experience) from cognitive (knowledge of sharing) contributions to social outcomes of food sharing. Two lab-based food-sharing studies will be conducted where, by manipulating what participants are told about the shared foods and what they actually eat, we can dissociate the cognitive knowledge of food-sharing from the sensory experience. Partners will subsequently complete a social coordination game that either requires cognitive cooperation (Study 1, economic game) or sensorimotor coordination (Study 2, synchronization of dyadic finger-tapping). Thus, the present project will elucidate how different pathways to social affiliation via food-sharing (sensory versus cognitive) impact coordination across distinct domains of social behavior.

Synergy and Synchronization in Dynamic Social Interaction: A Lindy Hop Partner Dancing Case Study

  • Peter Thestrup Waade, Interacting Minds Centre, Aarhus University
  • Seednumber: 26264
  • Collaborators:
    Julian Zubek, Faculty of Psychology, University of Warsaw
    Anna Zamm, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University
    Olivia Foster Vander Elst, Department of Clinical Medicine, Aarhus University
    Cordula Vesper, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University
    Kristian Tylén, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University
    Rebekah Baglini, Department of Linguistics, Cognitive Science and Semiotics, Aarhus University
    Luke Ring, Cognitive Science, Aarhus University
    Fernando Rosas, Department of Brain Sciences, Imperial College London
    Ewa Nagorska, Faculty of Psychology, University of Warsaw


In experimental research on joint action and coordination, synchrony (i.e., similar relative temporal ordering of actions) is often used as an operationalization of coordination, which can then be related to measures of joint agency. This is appropriate in simple, goal-directed contexts where synchrony is the explicit goal, but it constrains what can be studied. In this project, we demonstrate that research on joint action and dynamic social interaction can move beyond these experimental and measurement limitations, using improvised partner dancing (Lindy Hop) as a naturalistic, goal-free, and physically measurable activity.

In this study, we introduce synergy (i.e., the degree to which coupled systems form an emergent whole, with greater predictive information than is contained in the constituent parts) as an important quality of movement coordination, which can be operationalized with tools from information theory. We investigate the claim that dancing is a synergetic activity, as well as the relationship between synchrony, synergy, and a sense of joint agency. We also investigate how the distributedness of leader-follower dynamics modulates these relationships. This provides an exciting new possibility for quantifying various aspects of social interaction.
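The core intuition behind synergy can be shown with a toy information-theoretic example. The sketch below is a hedged illustration, not the project's estimator: it uses an XOR-coupled pair of binary streams, where neither stream alone predicts the next state but the pair jointly does, so the joint predictive information exceeds the sum of the individual contributions.

```python
import math
import random
from collections import Counter

def mutual_info(pairs):
    """Empirical mutual information (bits) between two discrete variables."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(1)
# Toy dyad: dancer A's next move depends on the XOR of both dancers' current
# moves, so neither stream alone is predictive -- only the pair is.
x = [random.randint(0, 1) for _ in range(4000)]
y = [random.randint(0, 1) for _ in range(4000)]
x_next = [xi ^ yi for xi, yi in zip(x, y)]

joint = mutual_info(list(zip(zip(x, y), x_next)))   # I((X, Y); X')
solo = mutual_info(list(zip(x, x_next))) + mutual_info(list(zip(y, x_next)))
synergy = joint - solo
print(f"synergy proxy: {synergy:.2f} bits")         # close to 1 bit for XOR
```

A pure synchrony measure (e.g., zero-lag correlation between the two streams) would register nothing here, which is precisely why synergy-style measures capture aspects of coordination that synchrony misses.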

Round 2 Summer 2023

Applause Culture in Symphonic Concert Audiences

  • Niels Christian Hansen, Interacting Minds Centre, Aarhus University
  • Seednumber: 26265
  • Collaborators:
    Alexander R. Jensenius, RITMO Center for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo
    Finn Upham, RITMO Center for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo


Applause is a mysterious social phenomenon entailing the spontaneous, ritualised expression of enthusiasm in response to displays of impressive skill in temporal arts such as theatre, acrobatics, and musical concerts. Although clapping behaviours arise universally in infants, the exact modes of expressing enthusiasm vary widely across cultures. For example, folk beliefs amongst classical music fans suggest that standing ovations arise especially frequently in North America, that synchronised clapping emerges more easily in Scandinavia, and that loud vocalisations are a more common part of Mediterranean audience behaviour. This survey study, conducted with frequent attendees of symphony concerts in Denmark, Italy, and the United States, provides the first-ever empirical test of anecdotal knowledge relating to applause culture in classical music. The outcomes of this seed project will contribute towards securing funding for a larger-scale research agenda aimed at promoting data-based artistic and business-related decision-making within the creative and cultural sector.

The limits of AI semiotics: A pilot study probing generative AI image models’ understanding of causality and abstraction

  • Maja Bak Herrie, Department of Art History, Aesthetics & Culture and Museology, Aarhus University
  • Seednumber: 26266
  • Collaborators:
    Simon Aagaard Enni, The Statistics and Machine Learning Team, Danish Technological Institute


AI image generation models show great promise in simulating high-quality imagery, yet they also tend to fail in subtle and strange ways: producing hands with six fingers, or text that resembles no language. We believe that there are foundational limitations in the way state-of-the-art AI image generation models respond to different types of visual signification, i.e., in their understanding of the relation between the prompt and what that prompt means. To probe these limitations, we apply the semiotics of Charles S. Peirce, specifically his tripartite model of signs. In a pilot study, we investigate how successfully different image generation models respond to prompts relying on each of the three types of signification in Peirce’s model, and trace the results back to the underlying techniques used to build the models. The results of our experiments, whether positive or negative, open up new questions about the depth or shallowness of AI image generation models’ understanding of signs.

Validating non-invasive conductivity estimation methods for their application in human electrophysiology

  • Tamas Minarik, Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University
  • Seednumber: 26267
  • Collaborators:
    Preben Kidmose, Department of Engineering, Aarhus University
    Carsten Wolters, Institute of Biomagnetism and Biosignalanalysis, University of Muenster
    Sune Jespersen, Department of Clinical Medicine, Aarhus University; Department of Physics and Astronomy, Aarhus University
    Menglin Chen, Department of Biological and Chemical Engineering, Aarhus University


A critical question in most non-invasive electrophysiological research is which brain regions a given signal originates from. This task is non-trivial, and it is particularly challenging in the case of electroencephalography (EEG), as high-quality source estimation relies on accurate estimates of electrical conductivity in the brain and across the head. This effectively requires high-resolution conductivity maps of the entire brain volume, the skull, and the scalp. Several non-invasive methods have been proposed over the years, and some MRI-based methods are particularly promising. However, none of these methods has been validated with phantoms that approximate anatomically realistic volumes and possess precisely known conductivity values. Thus, we currently have no understanding of how accurate the produced conductivity maps really are. Building an anatomically realistic conductivity phantom to establish ground truth is therefore an essential step in enabling the field to move forward and to achieve high-quality source estimation with EEG – the most affordable and widely used method for recording electrical brain activity non-invasively. The current research takes on the challenging task of building a reasonably accurate anatomical conductivity phantom and testing the accuracy of two MRI-based conductivity estimation methods, ultimately leading to a significant improvement in our ability to determine the origin of EEG signals, be they oscillations or ERPs.
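Why conductivity accuracy matters for source estimation can be seen already in the simplest idealisation: a point current source in an infinite homogeneous conductor, where the potential scales as 1/σ, so a relative misestimate of conductivity maps directly into a relative error in the inferred source strength. The sketch below uses this textbook monopole formula with illustrative values (not project measurements); real head models are of course far more complex.

```python
import math

def point_source_potential(current_amp, sigma, r):
    """Potential of a point current source in an infinite homogeneous
    conductor: V = I / (4 * pi * sigma * r)."""
    return current_amp / (4 * math.pi * sigma * r)

I0 = 1e-8               # source current (A), illustrative
r = 0.05                # source-sensor distance (m), illustrative
sigma_true = 0.33       # gray-matter conductivity (S/m), a common literature value
sigma_off = 0.33 * 1.2  # a hypothetical 20% conductivity misestimate

v_true = point_source_potential(I0, sigma_true, r)
v_off = point_source_potential(I0, sigma_off, r)
rel_err = abs(v_off - v_true) / v_true
print(f"relative potential error from 20% conductivity error: {rel_err:.1%}")
```

In an inverse problem this works the other way around: a forward model built on a misestimated conductivity map mislocates or mis-scales the reconstructed sources, which is what a ground-truth phantom would let the field quantify.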