Last updated: 2021.11.24
MCG Master projects | Internships
The Music Cognition Group (MCG) has several (unfunded) internships available each academic year. Virtually all projects are related to ongoing research supervised by PhD candidates and/or postdocs associated with MCG. You can find an overview of the current projects below. Feel free to contact the person listed in the project description directly. For general questions, e-mail the P.A. of MCG.
  1. Explorative design and evaluation of memory-based listening games

    This project is part of a series of pilot studies that will contribute to an interdisciplinary research agenda on musicality (Honing, 2018). The main aim is to develop engaging listening games that can collect the potentially hundreds of thousands of responses needed to properly characterize musicality phenotypes, and their variability, across geographical regions with ready access to the internet.

    The main task is to explore and evaluate several memory game designs for probing music cognition: games that are effective in probing the underlying phenotype (e.g., relative pitch, contour perception) and its variability, while being intrinsically motivating (Burgoyne et al., 2013; Honing, 2021). An example is Memory (the Matching Pairs game). Several variants could be explored and evaluated, and the proper statistical methods applied to analyse the results. This project requires a creative student with an idiosyncratic mind. The project will lead either to a literature thesis or a Master thesis.

    • BSc in Computational Musicology, Brain and Cognitive Sciences (and Society) and/or Psychometrics;
    • Familiar with R, Python or related software;
    • Creative mind;
    • Interest in music cognition.

    • Burgoyne, J. A., Bountouridis, D., Balen, J. van, & Honing, H. (2013). Hooked: A Game For Discovering What Makes Music Catchy. In A. De Souza Britto, F. Gouyon, & S. Dixon (Eds.), ISMIR (pp. 245–250). Curitiba, Brazil.
    • Harrison, P. M. C., Collins, T., & Müllensiefen, D. (2017). Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation. Scientific Reports, 7(1), 1–18.
    • Honing, H. (2018). The Origins of Musicality. Cambridge, Mass.: The MIT Press.
    • Honing, H. (2021). Lured into listening: Engaging games as an alternative to reward-based crowdsourcing in music research. Zeitschrift für Psychologie, 229(4). doi: 10.1027/2151-2604/a000474

    Contact: prof. dr H. Honing
    Starting date: Semester 2, 2021/22 or earlier.

  2. What are we actually listening to when we listen to music? Pitch, rhythm or something else?

    This project is about customizing and evaluating a set of advanced signal-processing tools that allow for independently manipulating the pitch, temporal and/or spectral dimensions of an existing audio recording of a musical fragment. The aim is to answer the question: what are we actually listening to when we listen to music? Pitch, rhythm or something else?

    We know that a familiar song is easy to recognize, even when the music is slowed down (using a tempo transformation algorithm) or when all pitches are scaled up an octave (using a pitch transformation algorithm). In that sense, tempo and pitch are perceptually invariant: we don’t mind (or even notice) when they change; it is still the same song. But what about a change in sound color (timbre), rhythm (temporal structure), or other aspects of a musical sound? What are the most salient aspects of a musical signal that listeners attend to, remember, or consider essential in recognizing a song?

    To study this, MCG plans a series of experiments (with humans and other species) in which familiar musical fragments are transformed along different dimensions, in order to identify to what extent a particular musical dimension (e.g., pitch, rhythm, or timbre) is essential in recognizing a familiar song. State-of-the-art software that allows for these transformations needs to be customized, evaluated, and made accessible to a more general user group.

    The software will make use of spectro-temporal modulations (STM) (Elliott & Theunissen, 2009), a mathematical framework that unifies AM and FM filtering and that allows for separating timbre from rhythm (and vice versa). Furthermore, it will make use of noise vocoding (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005), an acoustic technique that removes pitch cues from an audio signal, but that preserves overall spectral contour.
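    For orientation, the core of noise vocoding can be sketched in a few lines (shown here in Python with NumPy/SciPy rather than the Matlab toolchain the project builds on; the band count and cutoff frequencies are illustrative choices, not the project's settings): filter the signal into log-spaced bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, sr, n_bands=8, f_lo=80.0, f_hi=6000.0):
    """Replace pitch cues with band-limited noise while preserving the
    overall spectral contour (a minimal sketch of noise vocoding)."""
    rng = np.random.default_rng(0)
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros_like(signal, dtype=float)
    noise = rng.standard_normal(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))        # amplitude envelope of this band
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += env * carrier
    # Match the RMS level of the input.
    out *= np.sqrt(np.mean(signal**2) / (np.mean(out**2) + 1e-12))
    return out
```

    Applied to a recording, this removes the pitch percept but keeps the coarse spectro-temporal envelope, which is exactly the dimension the planned experiments would manipulate.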

    If time allows, a brief series of relatively straightforward categorization experiments (based on the work of Albouy, Benjamin, Morillon, & Zatorre, 2020) could provide insight into the effectiveness of the transformation method, and how best to incorporate it in future experiments on musicality.

    In summary, the student will work on a novel combination of methods to parametrically degrade a musical sound fragment, and will evaluate and demonstrate its usefulness. The resulting software will be used to generate stimuli for several future studies on musicality (Honing, 2018). The student will work independently on customizing existing Matlab code into an easy-to-use tool for psychologists, musicologists and/or biologists (see [1] for further information on the research context). The project will lead to a Master thesis.

    • BSc in Computer Science, Music Technology or related field;
    • Experience with Matlab and signal processing;
    • Interest in psychoacoustics and music cognition.

    Contact: prof. dr H. Honing
    Starting date: Semester 2, 2021/22 or earlier.

  3. Is pupil size a marker of beat perception?

    Beat perception is the cognitive skill that allows us to hear a regular pulse in music to which we can then synchronize. Perceiving this regularity in music allows us to dance and make music together. As such, it can be considered a fundamental musical trait (Honing, 2012). Beat perception may be explained by Dynamic Attending Theory (DAT), which proposes that attention synchronizes to an external rhythm (Jones, 2018). In line with this theory, several EEG and modelling studies suggest that beat and meter perception arise from groups of neurons that resonate or oscillate at the beat frequency (Lenc et al., 2021).

    Recently, it has been suggested that pupil size could be used as a marker of beat perception (Damsma et al., 2017; Fink et al., 2018). However, whether pupil size actually reflects oscillatory brain activity during rhythm perception is unclear. The aim of this Master project is to test this hypothesis by investigating whether beat-related frequencies are enhanced in the pupil signal, and by disentangling oscillatory pupil dilation from sound-evoked responses (Doelling et al., 2019).
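    The first analysis step amounts to asking whether the pupil trace carries power at the beat frequency. A rough sketch (in Python; the function name and parameters are illustrative assumptions, and a real pupil pipeline would first interpolate blinks and detrend the signal):

```python
import numpy as np

def power_at_beat(pupil, fs, beat_hz):
    """Estimate spectral power of a pupil-size trace at the beat frequency.
    Minimal sketch: mean-removal plus a periodogram; real pipelines would
    also handle blinks, detrending and baseline correction."""
    pupil = pupil - np.mean(pupil)                 # remove DC offset
    freqs = np.fft.rfftfreq(len(pupil), d=1.0 / fs)
    power = np.abs(np.fft.rfft(pupil)) ** 2 / len(pupil)
    idx = np.argmin(np.abs(freqs - beat_hz))       # nearest frequency bin
    return freqs[idx], power[idx]
```

    Comparing the power at beat-related frequencies against neighbouring, beat-unrelated frequencies is one way to separate an oscillatory component from broadband sound-evoked responses.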

    We are looking for an excellent and skilled Master student who will 1) analyze existing pilot pupil data; 2) set up a new eye tracking experiment and test participants; 3) analyze the results of this experiment. The project will lead to a Master thesis.

    • BSc in Psychology, AI, or related field;
    • Experience with signal analysis in time and frequency domain (e.g., pupil dilation or EEG);
    • Skilled user of R and/or Matlab;
    • Interest in music and rhythm cognition.

    Contact: dr A. Damsma
    Starting date: Semester 2, 2021/22 or earlier.

  4. Revisiting rhythm space: modeling diversity in categorical rhythm perception

    The aim of this project is to extend and evaluate existing computational models of categorical rhythm perception using a range of recently obtained empirical data (including [1] and [3]). One potential model is described in [2] (for alternatives see [3, 4]). Possible research questions are: a) Is it possible to learn – using machine learning techniques – the interaction function of a connectionist model from the different datasets (cf. Fig. 4 in [2])? And b) can the result be interpreted in a way that is informative to rhythm cognition research and the often ignored role of enculturation? The project will lead to a Master thesis.

    • BSc in Computer, Cognitive or Computational Science, or related field;
    • Expertise in Cognitive modelling, Machine Learning and/or Music Information Retrieval;
    • Familiarity with Common Lisp or related programming language;
    • Interest in music and rhythm cognition.

    [1] Desain & Honing (2003)
    [2] Desain & Honing (1991)
    [3] Jacoby & McDermott (2017)
    [4] Honing & Bouwer (2019). Rhythm. In Foundations in Music Psychology: Theory and Research. Cambridge: MIT Press.

    Contact: prof. dr H. Honing
    Starting date: Put on hold.

  5. Is statistical learning influenced by isochronous presentation?

    Most studies of statistical learning use stimuli that are presented isochronously, with constant time intervals between events. In rhythm perception, however, a distinction can be made between beat perception and statistical (or sequential) learning [1, 2, 3]. When a sequence with fixed transitional probabilities is presented isochronously, both beat perception and statistical learning can explain the results. However, if the same sequence is jittered, beat perception is disabled and only sequential learning can explain the results [3]. This literature thesis project will review the available empirical data on statistical learning in auditory perception, for human and nonhuman animals, and discuss the impact that isochrony might have on the results. Alternatively, the student could build a computational model that simulates and evaluates the results in the recent literature, leading to a literature thesis or Master thesis.
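    The isochronous-versus-jittered manipulation could be sketched as follows (Python; the function name, transition table and parameter values are illustrative, not taken from the cited studies): the same transitional probabilities generate the symbol sequence, while only the onset times differ between conditions.

```python
import numpy as np

def make_sequence(n_events, transitions, jitter_ms=0.0, ioi_ms=250.0, seed=0):
    """Generate a tone sequence with fixed transitional probabilities and
    either isochronous (jitter_ms=0) or temporally jittered onsets."""
    rng = np.random.default_rng(seed)
    states = list(transitions)
    seq = [rng.choice(states)]                    # random first symbol
    for _ in range(n_events - 1):
        probs = transitions[seq[-1]]              # row of the transition table
        seq.append(rng.choice(list(probs), p=list(probs.values())))
    onsets = np.arange(n_events) * ioi_ms         # isochronous grid
    if jitter_ms > 0:
        onsets = onsets + rng.uniform(-jitter_ms, jitter_ms, n_events)
    return seq, onsets
```

    Because the symbol statistics are identical in both conditions, any behavioural or EEG difference between them can be attributed to the temporal regularity rather than to sequential structure.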


    • BSc in Psychology, Computer Science or related field;
    • Familiarity with interpreting EEG, ERPs and EPs;
    • Interest in music and rhythm cognition.

    [1] Bouwer et al. (2016)
    [2] Honing et al. (2014)
    [3] Honing et al. (2018)
    [4] Attaheri et al. (2015)

    Contact: prof. dr H. Honing
    Starting date: Put on hold.

  6. Can beat perception and isochrony perception be disentangled in adults and newborns?

    To shed light on how humans learn to understand music, we need to discover the perceptual capabilities with which infants are born. Beat induction, the detection of a regular pulse in an auditory signal, is considered a fundamental human trait that, arguably, played a decisive role in the origin of music. Theorists remain divided on whether this ability is innate or learned. Winkler et al. (2009) provided the first evidence in support of the view that beat perception is innate.

    More recently however, Bouwer et al. (2014) pointed out that the used paradigm needs additional controls to be certain that any effects (or the lack thereof) are due to beat perception, and not, for instance, a result of pattern matching, acoustic variability or sequential learning.

    To disentangle beat perception from isochrony perception, a novel oddball paradigm is currently being adapted for a pilot at the Institute of Cognitive Neuroscience and Psychology, Budapest (MTA), in order to 1) replicate the results of Winkler et al. (2009), 2) compare them to two recent studies with humans (Bouwer et al., 2016) and nonhuman animals (Honing et al., 2018), and 3) disentangle beat perception from isochrony perception. To analyse and re-interpret these results (both published and in prep.), we are looking for an excellent and skilled Master student with expertise in Matlab and EEG analyses in both the time and frequency domain. The project will lead to a literature thesis or Master thesis.

    • Expertise in analysing EEG, ERP and/or MMN;
    • Skilled user of Matlab and statistical software;
    • Interest in music cognition and rhythm cognition.

    [1] Winkler et al. (2009)
    [2] Bouwer et al. (2016)
    [3] Honing et al. (2018)

    Contact: prof. dr H. Honing
    Starting date: Winter 2019. [position filled]

  7. Computational probabilistic modeling of rhythm perception

    This project has a few different variants, all involving IDyOM: a modeling framework designed for music prediction and based on multiple-viewpoint systems, a class of music-tailored sequential prediction systems built on data compression methods. Recently, we have extended IDyOM to better predict the onset time of upcoming events by using metrical structure probabilistically inferred from the input.

    Requirements (for all variants): a keen interest in cognitive modeling. Knowledge of probability theory and programming experience are required. Common Lisp experience will be very helpful; otherwise, an eagerness to immerse yourself in a new programming language within a few weeks is required. Familiarity with music theory might be helpful.

    Variant 1. The influence of patterns in pitch on the perception of meter
    Currently, our model infers meters using only patterns of onset times. However, melodic patterns are known to also exert a strong influence on the perception of meter. For example, repetitions of certain melodic patterns strongly induce meters with corresponding periodicities, and downbeats tend to align with harmonically salient pitches. The goal of this project is to refine this idea further, translate it into a concrete plan for modeling it within the multiple-viewpoint-systems framework, implement it, and analyze the results, possibly deriving cognitive predictions about the interaction of melody and meter perception.

    Variant 2. Probabilistic key inference
    Using a similar approach to how we infer meters, it may be possible to infer other kinds of hidden structure that underlie the musical surface. Key is one such structure. The goal of this project is to extend the IDyOM model to infer the most likely key signature for a melody. The approach can likely be very similar to how our model infers meters.
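    As a point of reference (a classical baseline, not the probabilistic viewpoint approach this project would implement), key inference can be illustrated with the Krumhansl–Schmuckler profile-matching method: correlate the melody's pitch-class distribution with each transposition of the major and minor key profiles.

```python
import numpy as np

# Krumhansl-Kessler major/minor key profiles (probe-tone ratings).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def estimate_key(pitches):
    """Return (tonic pitch class, mode) maximizing the correlation between
    the melody's pitch-class counts and a rotated key profile."""
    pcs = np.bincount([p % 12 for p in pitches], minlength=12).astype(float)
    best, best_r = None, -2.0
    for tonic in range(12):
        for name, profile in (("major", MAJOR), ("minor", MINOR)):
            r = np.corrcoef(pcs, np.roll(profile, tonic))[0, 1]
            if r > best_r:
                best, best_r = (tonic, name), r
    return best
```

    A viewpoint-based model would instead treat key as a latent variable and weight each candidate key by the probability it assigns to the observed melody, analogous to how meter is inferred.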

    Variant 3. Improving music prediction performance
    Multiple-viewpoint systems use multiple abstracted representations simultaneously to predict upcoming events in a piece of music. For each representation in the multiple-viewpoint system, a data compression algorithm is used to predict upcoming symbols in that representation. Currently, the data compression algorithm we use for predicting these symbols is prediction by partial match (PPM*), which implements a variable-order Markov model. However, other methods may perform better at predicting strings of symbols derived from music. In particular, recurrent neural networks are a good candidate, given their recent success in a variety of sequential prediction domains. The goal of this project is to substitute the PPM* prediction mechanism with a potentially better one and systematically evaluate its prediction performance.
    N.B. Variant-specific requirements: in addition to the global requirements, machine learning expertise is required. Experience with recurrent neural networks is preferable, but solid general machine learning experience could also be sufficient.
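    To illustrate the kind of symbol prediction PPM* performs, here is a toy variable-order predictor with simple backoff (Python rather than the project's Common Lisp; it omits PPM*'s escape probabilities and unbounded contexts, so it is a sketch of the idea, not the algorithm itself):

```python
from collections import defaultdict

class BackoffPredictor:
    """Toy variable-order Markov predictor: count symbols after every
    context up to max_order, then predict from the longest seen context."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        for i, sym in enumerate(seq):
            for order in range(self.max_order + 1):
                if i - order < 0:
                    break
                ctx = tuple(seq[i - order:i])
                self.counts[ctx][sym] += 1

    def predict(self, context):
        """Probability distribution over next symbols, backing off from the
        longest matching context down to the empty (order-0) context."""
        for order in range(self.max_order, -1, -1):
            if order > len(context):
                continue
            ctx = tuple(context[-order:]) if order > 0 else ()
            dist = self.counts.get(ctx)
            if dist:
                total = sum(dist.values())
                return {s: c / total for s, c in dist.items()}
        return {}
```

    Substituting the predictor, as Variant 3 proposes, means replacing this per-representation prediction step (e.g., with a recurrent network) while keeping the surrounding multiple-viewpoint machinery intact.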

    Contact: B. van der Weij, MSc
    Starting date: Spring 2017. [position filled]