The Music Cognition Group (MCG) has several (unfunded) internships available each academic year. Virtually all projects are related to ongoing research supervised by PhDs and/or postdocs associated with MCG. You can find an overview of the current projects below. Feel free to contact the person listed in the project description directly; for general questions, you can e-mail the P.A. of MCG.
- Memory for familiar music: what and how do we remember?
Two broad approaches to studying musical memory are what we remember about music we have heard and how we remember music. The most common method investigates what we remember about music, by asking listeners to generate song titles or to identify whether a song has been heard before. Another approach focuses on the organization of memory, to understand how we remember music. Existing studies suggest that familiar music plays a crucial role in musical memory (Demorest et al., 2016; Halpern et al., 1998; Krumhansl, 2010), but how it is represented in memory is not fully understood.
A guiding principle of the current research is that musical memory retrieval involves associations, namely musical relations (i.e., features of a musical nature) and extramusical relations (e.g., category of music or episodic connections) (Halpern, 1984). Additionally, more recent research has demonstrated that culture has an effect on musical familiarity and on memory performance (Demorest et al., 2008; Patel & Demorest, 2013).
The aim of this project is to create a formalized memory model for familiar music. We are looking for an excellent and skilled Master student who will 1) analyse existing data; 2) formalize a basic recognition memory model for familiar music; and 3) analyse the results of a listening experiment. The project will lead to a literature thesis or a Master thesis.
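As a point of reference only, the sketch below (in Python, assuming a standard old/new recognition task; an illustration, not the model to be developed) shows how recognition performance for familiar songs can be summarised with signal detection theory, separating memory sensitivity from response bias.

```python
from statistics import NormalDist

def sdt(hits, misses, false_alarms, correct_rejections):
    """Old/new recognition summarised as d-prime (sensitivity) and criterion (bias)."""
    # Log-linear correction avoids infinite z-scores at hit/false-alarm rates of 0 or 1
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return {"d_prime": z(hr) - z(far), "criterion": -0.5 * (z(hr) + z(far))}

# Hypothetical listener: 40 'old' songs (34 recognised) and 40 'new' songs (6 false alarms)
print(sdt(hits=34, misses=6, false_alarms=6, correct_rejections=34))
```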
Requirements:
- BSc in Brain and Cognitive Sciences, Musicology or related field;
- Familiar with R;
- Open-mindedness;
- Interest in music cognition.
References:
- Demorest, S. M., Morrison, S. J., Beken, M. N., & Jungbluth, D. (2008). Lost in translation: An enculturation effect in music memory performance. Music Perception. doi: 10.1525/mp.2008.25.3.213
- Demorest, S. M., Morrison, S. J., Nguyen, V. Q., & Bodnar, E. N. (2016). The influence of contextual cues on cultural bias in music memory. Music Perception, 33(5), 590–600. doi: 10.1525/MP.2016.33.5.590
- Halpern, A. R. (1984). The organization of memory for familiar songs. Journal of Experimental Psychology: Learning, Memory and Cognition, 10, 496–512.
- Halpern, A. R., Bartlett, J. C., & Dowling, W. J. (1998). Perception of mode, rhythm, and contour in unfamiliar melodies: Effects of age and experience. Music Perception. doi: 10.2307/40300862
- Krumhansl, C. L. (2010). Plink: "Thin slices" of music. Music Perception, 27(5), 337–354.
- Patel, A. D., & Demorest, S. M. (2013). Comparative music cognition: Cross-species and cross-cultural studies. In D. Deutsch (Ed.), The Psychology of Music (pp. 647–681). Elsevier Academic Press. doi: 10.1016/B978-0-12-381460-9.00016-X
Contact: Xuan Huang
Starting date: Spring 2023.
- Evaluation of memory-based listening experiments
This project is part of a series of pilot studies that will contribute to an interdisciplinary research agenda on musicality (Honing, 2018). The main aim is to develop engaging listening games that can potentially collect the hundreds of thousands of responses needed to properly characterize musicality phenotypes, and their variability, in a variety of geographical regions with ready access to the internet.
The main task is to explore, formalize and/or evaluate several memory game designs for probing music cognition: games that could be effective in probing the underlying phenotype (e.g., relative pitch, contour perception) and its variability, while being intrinsically motivating (Burgoyne et al., 2013; Honing, 2021). An example is Memory, or the Matching Pairs game. Several variants could be explored and evaluated, along with the proper statistical methods to analyse the results.
The simplest version of the Matching Pairs game (MP1) is much like Memory, the card-game most children love to play. In this version, two melodies have to be judged as being the same, in a context of several competing alternatives. Note that this game variant has to make use of a ‘gold standard’ to be able to rate a response as correct (or not). As such, the game is only indirectly able to reveal the variability that we are interested in.
Hence, a variant of the game (MP2) will be ‘subjective’, without deciding beforehand what counts as a correct response. This allows for exploring the underlying regularities that the ‘objective’ version has to ignore (e.g., to avoid the Western notion that two melodies are different because one of the pitches is different). In the ‘subjective’ version, items can be similar in different respects, so more than one pair might be correct. Instead of ‘correctness’, we will use alternative rating measures to be able to give feedback to the player, using, for example, internal consistency or some form of peer judgement. The responses will reveal which aspects of the musical signal are more prominent, salient or easier to remember in the context of several alternatives.
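As one possible direction (an assumption for illustration, not the project's chosen metric), the sketch below scores a player's pairings by how often other players formed the same pairs, so that feedback can be given without a predefined answer key.

```python
from collections import Counter

def peer_agreement(player_pairs, other_players_pairs):
    """Mean proportion of other players who formed the same melody pairs."""
    pair_counts = Counter(frozenset(p) for pairs in other_players_pairs for p in pairs)
    n_others = len(other_players_pairs)
    scores = [pair_counts[frozenset(p)] / n_others for p in player_pairs]
    return sum(scores) / len(scores)

# Hypothetical responses; melodies are labelled m1..m6
others = [[("m1", "m2"), ("m3", "m5")],
          [("m1", "m2"), ("m4", "m6")],
          [("m1", "m3"), ("m4", "m6")]]
print(peer_agreement([("m1", "m2"), ("m4", "m6")], others))  # 0.67: fairly typical pairings
```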
The final variant of the game (MP3) will dynamically adapt to the level of the player, resulting in a robust and, in terms of duration of the experiment, efficient game useful for probing musicality on a large scale. It will adjust the quality of the stimuli and the difficulty of the game, depending on how well the player performs. This variant will introduce some additional challenges to the design of the game, as well as to the psychometric analyses of the results (e.g., Harrison et al., 2017).
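For illustration, the sketch below implements one standard adaptive scheme, a 2-down/1-up staircase; the game might instead use item-response-theory-based adaptation (cf. Harrison et al., 2017), and the difficulty parameter used here (a pitch difference, in semitones, between paired melodies) is purely hypothetical.

```python
def staircase(responses, start=4.0, step=0.5, floor=0.25):
    """2-down/1-up staircase: harder after two consecutive correct trials, easier after an error."""
    difficulty, streak, track = start, 0, []
    for correct in responses:
        track.append(difficulty)           # difficulty presented on this trial
        if correct:
            streak += 1
            if streak == 2:                # two correct in a row -> harder
                difficulty = max(floor, difficulty - step)
                streak = 0
        else:                              # an error -> easier
            difficulty += step
            streak = 0
    return track

print(staircase([True, True, True, True, False, True, True]))
```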
In summary, the student will work, in collaboration with and supervised by members of MCG, on an ambitious project aiming to tease apart the components of musicality (see [1] for further information on the research context). The project will lead to a Master thesis.
Requirements:
- BSc in Brain and Cognitive Sciences, Psychometrics or related field;
- Familiar with R, Python and/or statistics software;
- Interest in music cognition.
References:
- Burgoyne, J. A., Bountouridis, D., Balen, J. van, & Honing, H. (2013). Hooked: A Game For Discovering What Makes Music Catchy. In A. De Souza Britto, F. Gouyon, & S. Dixon (Eds.), ISMIR (pp. 245–250). Curitiba, Brazil.
- Eerola, T., Armitage, J., Lavan, N., & Knight, S. (2021). Online Data Collection in Auditory Perception and Cognition Research: Recruitment, Testing, Data Quality and Ethical Considerations. Auditory Perception & Cognition, 1–30. doi: 10.1080/25742442.2021.2007718
- Harrison, P. M. C., Collins, T., & Müllensiefen, D. (2017). Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation. Scientific Reports, 7(1), 1–18. doi: 10.1038/s41598-017-03586-z
- Honing, H. (2018). The Origins of Musicality. Cambridge, Mass.: The MIT Press.
- Honing, H. (2021). Lured into listening: Engaging games as an alternative to reward-based crowdsourcing in music research. Zeitschrift für Psychologie, 229(4). doi: 10.1027/2151-2604/a000474 [PsyArXiv]
Contact: prof. dr H. Honing
Starting date: Semester 1 or 2, 2022/23.
- What are we actually listening to when we listen to music? Pitch, rhythm or something else?
This project is about customizing and evaluating a set of advanced signal processing tools that allow for independently manipulating the pitch, temporal and/or spectral dimensions of an existing audio recording of a musical fragment, in order to answer the question: what are we actually listening to when we listen to music? Pitch, rhythm or something else?
We know that a familiar song is easy to recognize, even when the music is slowed down (using a tempo transformation algorithm) or when all pitches are scaled up an octave (using a pitch transformation algorithm). In that sense, recognition is invariant to tempo and pitch. We don’t mind (or even notice) when they change; it is still the same song. But what about a change in sound color (timbre), in rhythm (temporal structure), or in other aspects of a musical sound? What are the most salient aspects of a musical signal that listeners attend to, remember, or consider essential in recognizing a song?
To study this, MCG is planning a series of experiments (with humans and other species) in which familiar musical fragments are transformed along different dimensions. This will make it possible to identify to what extent a particular musical dimension (e.g., pitch, rhythm, or timbre) is essential in recognizing a familiar song. State-of-the-art software that allows for these transformations needs to be customized, evaluated, and made accessible to a more general user group.
The software will make use of spectro-temporal modulations (STM) (Elliott & Theunissen, 2009), a mathematical framework that unifies AM and FM filtering and that allows for separating timbre from rhythm (and vice versa). Furthermore, it will make use of noise vocoding (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005), an acoustic technique that removes pitch cues from an audio signal, but that preserves overall spectral contour.
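The project will build on existing Matlab implementations of these methods; purely to illustrate the idea behind noise vocoding, here is a minimal Python sketch (the band edges, filter order and number of bands are arbitrary choices, not the settings used by MCG).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=8, f_lo=80.0, f_hi=8000.0):
    """Replace the fine structure in each frequency band with band-limited noise,
    keeping only the per-band temporal envelope (this removes pitch cues)."""
    x = np.asarray(x, dtype=float)
    f_hi = min(f_hi, 0.45 * fs)                      # keep band edges below Nyquist
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    noise = np.random.randn(len(x))
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                   # band-limited signal
        envelope = np.abs(hilbert(band))             # amplitude envelope
        carrier = sosfiltfilt(sos, noise)            # noise restricted to the same band
        out += envelope * carrier                    # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)       # normalize to avoid clipping
```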
If time allows, a brief series of relatively straightforward categorization experiments (based on the work of Albouy, Benjamin, Morillon, & Zatorre, 2020) could provide insight into the effectiveness of the transformation method, and into how best to incorporate these transformations in future experiments on musicality.
In summary, the student will work, in collaboration with and supervised by members of MCG, on a novel combination of methods to parametrically degrade a musical sound fragment, and will evaluate and demonstrate its usefulness. The resulting software will be used to generate stimuli for several future studies on musicality (Honing, 2018). The student will work independently on customizing existing Matlab code into an easy-to-use tool for psychologists, musicologists and/or biologists (see [1] for further information on the research context). The project will lead to a Master thesis.
Requirements:
- BSc in Computer Science, Music Technology or related field;
- Expertise with Matlab and/or audio signal processing;
- Interest in psychoacoustics and music cognition.
Contact: prof. dr H. Honing
Starting date: Semester 1 or 2, 2022/23.
- Modelling musical rhythm
In music, the rhythmic structure allows us to predict the timing of events, which optimises processing and makes it possible for us to move in synchrony to the music. Several types of structure exist in rhythm, and how we base predictions on these different types of structure (such as the rhythmic pattern, groups, and the beat) is unclear. In this project, we will use modelling and behavioural responses to understand how humans process different types of rhythms. Specifically, we will examine whether oscillator models and probabilistic models explain different aspects of rhythmic behaviour: entrainment to a beat and predictions based on the rhythmic pattern, respectively. Depending on the student’s interest and experience, they can take part in different sub-projects associated with the overarching questions. Some of this work will be more model-oriented (for example: comparing how different oscillator models react to various rhythms) and some more lab-oriented (for example: collecting and analysing behavioural responses from people listening to various rhythms). It will also be possible for the student to focus on their own research question (for example: do differences in rhythm perception depend on musical expertise?).
Depending on skills and interest, the student may be involved in designing the experiments, modelling, and collecting and analysing the data.
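By way of illustration, the sketch below implements a toy phase- and period-correcting oscillator (a deliberately simplified example, not any specific published oscillator model), showing how an entrainment model can predict upcoming beat times and adjust to a rhythm.

```python
import numpy as np

def entrain(onsets, period=0.6, alpha=0.25, beta=0.1):
    """Toy entrainment model: predict the next beat, then nudge phase and period
    toward each observed onset (alpha = phase correction, beta = period correction)."""
    next_beat = onsets[0]                       # initialise the oscillator on the first onset
    predictions = []
    for onset in onsets:
        while next_beat < onset - period / 2:   # catch up if an onset was skipped
            next_beat += period
        predictions.append(next_beat)
        error = onset - next_beat               # asynchrony between onset and prediction
        next_beat += alpha * error              # phase correction
        period += beta * error                  # period correction
        next_beat += period                     # predict the following beat
    return np.asarray(predictions)

# Isochronous sequence slightly faster than the oscillator's initial period
onsets = np.cumsum(np.full(20, 0.55))
print(np.round(entrain(onsets) - onsets, 3))    # asynchronies damp out as the oscillator adapts
```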
References:
- Bouwer, F. L., Nityananda, V., Rouse, A. A., & ten Cate, C. (2021). Rhythmic abilities in humans and non-human animals: A review and recommendations from a methodological perspective. Philosophical Transactions of the Royal Society B, 376, 20200335. https://doi.org/10.1098/rstb.2020.0335
- Bouwer, F. L., Honing, H., & Slagter, H. A. (2020). Beat-based and memory-based temporal expectations in rhythm: similar perceptual effects, different underlying mechanisms. Journal of Cognitive Neuroscience, 32(7), 1221–1241. https://doi.org/10.1162/jocn_a_01529
- Large, E. W., Herrera, J. A., & Velasco, M. J. (2015). Neural networks for beat perception in musical rhythm. Frontiers in Systems Neuroscience, 9(November), 159. https://doi.org/10.3389/fnsys.2015.00159
- Pearce, M. T. (2018). Statistical learning and probabilistic prediction in music cognition: Mechanisms of stylistic enculturation. Annals of the New York Academy of Sciences, 1423, 378–395. https://doi.org/10.1111/nyas.13654
Contact: dr Fleur L. Bouwer
Starting date: Spring/Summer 2023.
- Explorative design and evaluation of memory-based listening games
This project is part of a series of pilot studies that will contribute to an interdisciplinary research agenda on musicality (Honing, 2018). The main aim is to develop engaging listening games that can potentially collect the hundreds of thousands of responses needed to properly characterize musicality phenotypes, and their variability, in a variety of geographical regions with ready access to the internet.
The main task is to explore and evaluate several memory game designs for probing music cognition: games that could be effective in probing the underlying phenotype (e.g., relative pitch, contour perception) and its variability, while being intrinsically motivating (Burgoyne et al., 2013; Honing, 2021). An example is Memory, or the Matching Pairs game. Several variants could be explored and evaluated, along with the proper statistical methods to analyse the results. This project requires a creative student with an idiosyncratic mind. The project will lead either to a literature thesis or a Master thesis.
Requirements:
- BSc in Computational musicology, Brain and Cognitive Sciences (and Society) and/or Psychometrics;
- Familiar with R, Python or related software;
- Creative mind;
- Interest in music cognition.
References:
- Burgoyne, J. A., Bountouridis, D., Balen, J. van, & Honing, H. (2013). Hooked: A Game For Discovering What Makes Music Catchy. In A. De Souza Britto, F. Gouyon, & S. Dixon (Eds.), ISMIR (pp. 245–250). Curitiba, Brazil.
- Eerola, T., Armitage, J., Lavan, N., & Knight, S. (2021). Online Data Collection in Auditory Perception and Cognition Research: Recruitment, Testing, Data Quality and Ethical Considerations. Auditory Perception & Cognition, 1–30. doi: 10.1080/25742442.2021.2007718
- Harrison, P. M. C., Collins, T., & Müllensiefen, D. (2017). Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation. Scientific Reports, 7(1), 1–18. doi: 10.1038/s41598-017-03586-z
- Honing, H. (2018). The Origins of Musicality. Cambridge, Mass.: The MIT Press.
- Honing, H. (2021). Lured into listening: Engaging games as an alternative to reward-based crowdsourcing in music research. Zeitschrift für Psychologie, 229(4). doi: 10.1027/2151-2604/a000474 [PsyArXiv]
Contact: prof. dr H. Honing
Starting date: Spring 2022. [position filled]
- Is pupil size a marker of beat perception?
Beat perception is the cognitive skill that allows us to hear a regular pulse in music to which we can then synchronize. Perceiving this regularity in music allows us to dance and make music together. As such, it can be considered a fundamental musical trait (Honing, 2012). Beat perception could be explained with Dynamic Attending Theory (DAT), which proposes that attention synchronizes to an external rhythm (Jones, 2018). In line with this theory, several EEG and modelling studies suggest that beat and meter perception arises from groups of neurons that resonate or oscillate at beat frequency (Lenc et al., 2021).
Recently, it has been suggested that pupil size could be used as a marker of beat perception (Damsma et al., 2017; Fink et al., 2018). However, whether pupil size actually reflects oscillatory brain activity during rhythm perception is unclear. The aim of this Master project is to test this hypothesis by investigating whether beat-related frequencies are enhanced in the pupil signal, and by disentangling oscillatory pupil dilation from sound-evoked responses (Doelling et al., 2019).
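As a rough illustration of the kind of analysis involved (an assumption about the approach, not the project's actual pipeline), the sketch below asks whether spectral power at the beat frequency stands out in a pupil trace relative to neighbouring frequencies.

```python
import numpy as np
from scipy.signal import welch

def beat_power_ratio(pupil, fs, beat_hz=2.0, band=0.2):
    """Power at the beat frequency relative to neighbouring frequencies (>1 suggests enhancement)."""
    freqs, psd = welch(pupil, fs=fs, nperseg=int(fs * 20))   # ~0.05 Hz frequency resolution
    at_beat = psd[np.abs(freqs - beat_hz) <= band / 2].mean()
    neighbours = psd[(np.abs(freqs - beat_hz) > band / 2)
                     & (np.abs(freqs - beat_hz) <= 3 * band)].mean()
    return at_beat / neighbours

# Hypothetical data: 60 s of pupil size at 60 Hz with a weak 2 Hz (beat-rate) component
fs = 60
t = np.arange(0, 60, 1 / fs)
pupil = np.random.randn(t.size) + 0.3 * np.sin(2 * np.pi * 2.0 * t)
print(beat_power_ratio(pupil, fs))
```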
We are looking for an excellent and skilled Master student who will 1) analyze existing pilot pupil data; 2) set up a new eye tracking experiment and test participants; 3) analyze the results of this experiment.
The project will lead to a Master thesis.
Requirements:
- BSc in Psychology, AI, or related field;
- Experience with signal analysis in time and frequency domain (e.g., pupil dilation or EEG);
- Skilled user of R and/or Matlab;
- Interest in music and rhythm cognition.
Contact: dr A. Damsma
Starting date: Semester 2, 2021/22. [position filled]
- Revisiting rhythm space: modeling diversity in categorical rhythm perception
The aim of this project is to extend and evaluate existing computational models of categorical rhythm perception using a range of recently obtained empirical data (including [1] and [3]). One potential model is described in [2] (for alternatives, see [3, 4]). Possible research questions are: a) Is it possible to learn, using machine learning techniques, the interaction function of a connectionist model from the different datasets (cf. Fig. 4 in [2])? And b) can the result be interpreted in a way that is informative for rhythm cognition research and for the often ignored role of enculturation?
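As a toy baseline for comparison (not the connectionist model of [2]; the set of categories below is an arbitrary choice), a performed rhythm can be categorized by snapping its inter-onset intervals to the nearest small-integer-ratio pattern in rhythm space.

```python
import numpy as np
from itertools import product

# Candidate categories: all three-interval patterns with integer ratios 1..4
CATEGORIES = [np.array(c, dtype=float) for c in product(range(1, 5), repeat=3)]

def categorize(iois):
    """Assign a performed three-interval rhythm to the nearest integer-ratio category."""
    point = np.asarray(iois, dtype=float)
    point = point / point.sum()                     # position in rhythm space (the simplex)
    best = min(CATEGORIES, key=lambda c: np.linalg.norm(point - c / c.sum()))
    return tuple(int(v) for v in best)

print(categorize([0.24, 0.27, 0.49]))   # -> (1, 1, 2)
```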
The project will lead to a Master thesis.
Requirements:
- BSc in Computer, Cognitive or Computational Science, or related field
- Expertise in Cognitive modelling, Machine Learning and/or Music Information Retrieval
- Familiarity with Common Lisp or related programming language
- Interest in music and rhythm cognition
References:
- Desain & Honing (2003)
- Desain & Honing (1991)
- Jacoby & McDermott (2017)
- Honing & Bouwer (2019). Rhythm. In Foundations in Music Psychology: Theory and Research. Cambridge: MIT Press.
Contact: prof. dr H. Honing
Starting date: [Put on hold]
- Is statistical learning influenced by isochronous presentation?
Most studies in statistical learning use stimuli that are presented isochronously, with constant time intervals between events. In rhythm perception, however, a distinction can be made between beat perception and statistical (or sequential) learning [1, 2, 3]. When the same sequence, with the same transitional probabilities, is presented isochronously, both beat perception and statistical learning can explain the results. However, if the same sequence is jittered, beat perception is no longer possible and only sequential learning can explain the results [3]. This literature thesis project will review the available empirical data on statistical learning in auditory perception, for human and nonhuman animals, and discuss the impact that isochrony might have on the results. Alternatively, the student could develop a computational model, simulating, evaluating and discussing the results in the recent literature. The project will lead to a literature thesis or a Master thesis.
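To make the manipulation concrete (the transition matrix below is purely hypothetical), the sketch generates tone sequences with identical statistical structure, presented either isochronously or with jittered onsets.

```python
import numpy as np

rng = np.random.default_rng(0)

tones = ["A", "B", "C"]
transitions = {"A": [0.1, 0.8, 0.1],    # hypothetical transition probabilities
               "B": [0.1, 0.1, 0.8],
               "C": [0.8, 0.1, 0.1]}

def make_sequence(n=20, ioi=0.25, jitter=0.0):
    """Generate n tones from the transition matrix, with onset times in seconds."""
    seq, onsets, t, current = [], [], 0.0, "A"
    for _ in range(n):
        seq.append(current)
        onsets.append(round(t, 3))
        current = str(rng.choice(tones, p=transitions[current]))
        t += ioi + rng.uniform(-jitter, jitter)     # jitter = 0 gives isochrony
    return seq, onsets

print(make_sequence(jitter=0.0))     # isochronous: beat perception and sequential learning possible
print(make_sequence(jitter=0.08))    # jittered: only sequential learning remains
```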
Requirements:
- BSc in Psychology, Computer Science or related field.
- Familiarity with interpreting EEG, ERPs and EPs
- Interest in music and rhythm cognition
References:
[1] Bouwer et al. (2016)
[2] Honing et al. (2014)
[3] Honing et al. (2018)
[4] Attaheri et al. (2015)
Contact: prof. dr H. Honing
Starting date: [Put on hold]
- Can beat perception and isochrony perception be disentangled in adults and newborns?
To shed light on how humans learn to understand music, we need to discover the perceptual capabilities with which infants are born. Beat induction, the detection of a regular pulse in an auditory signal, is considered a fundamental human trait that, arguably, played a decisive role in the origin of music. Theorists remain divided on whether this ability is innate or learned. Winkler et al. (2009) provided the first evidence in support of the view that beat perception is innate.
More recently, however, Bouwer et al. (2014) pointed out that the paradigm used needs additional controls to be certain that any effects (or the lack thereof) are due to beat perception and not, for instance, the result of pattern matching, acoustic variability or sequential learning.
To disentangle beat perception from isochrony perception, a novel oddball paradigm is currently being adapted for a pilot at the Institute of Cognitive Neuroscience and Psychology, Budapest (MTA), in order to 1) replicate the results of Winkler et al. (2009), 2) compare them to two recent studies with humans (Bouwer et al., 2016) and nonhuman animals (Honing et al., 2018), and 3) disentangle beat perception from isochrony perception. To analyse and re-interpret these results (both published and in prep.), we are looking for an excellent and skilled Master student with expertise in Matlab and EEG analyses in both the time and frequency domain. The project will lead to a literature thesis or a Master thesis.
Requirements:
- Expertise in analysing EEG, ERP and/or MMN
- Skilled user of Matlab and statistical software
- Interest in music cognition and rhythm cognition
References:
[1] Winkler et al. (2009)
[2] Bouwer et al. (2016)
[3] Honing et al. (2018)
Contact: prof. dr H. Honing
Starting date: Winter 2019. [position filled]
- Computational probabilistic modeling of rhythm perception
This project has a few different variants, all involving IDyOM: a modelling framework designed for music prediction and based on multiple-viewpoint systems, a class of music-tailored sequential prediction systems built on data compression methods. Recently, we have extended IDyOM to better predict the onset time of upcoming events by using metrical structure that is probabilistically inferred from the input.
Requirements (for all variants):
A keen interest in cognitive modeling. Knowledge of probability theory and programming experience are required. Common Lisp experience will be very helpful; otherwise, eagerness to immerse yourself in a new programming language within a few weeks is required. Familiarity with music theory may be helpful.
Variant 1. The influence of patterns in pitch on the perception of meter
Currently, our model infers meters using only patterns of onset times. However, melodic patterns are known to also exert a strong influence on the perception of meter. For example, repetitions of certain melodic patterns strongly induce meters with corresponding periodicities. Furthermore, downbeats tend to align with harmonically salient pitches. The goal of this project is to further refine this idea, translate it into a concrete plan for modeling it within the multiple-viewpoint-systems framework, implement it, and analyze the results, possibly deriving cognitive predictions about the interaction of melody and meter perception.
Variant 2. Probabilistic key inference
Using a similar approach to how we infer meters, it may be possible to infer other kinds of hidden structure that underlie the musical surface. Key is one such structure. The goal of this project is to extend the IDyOM model to infer the most likely key signature for a melody. The approach can likely be very similar to how our model infers meters.
Variant 3. Improving music prediction performance
Multiple-viewpoint systems use multiple abstracted representations simultaneously to predict upcoming events in a piece of music. For each representation used in the multiple-viewpoint system, a data compression algorithm is used to predict upcoming symbols in that representation. Currently, the data-compression algorithm we use for predicting these symbols is prediction by partial match (PPM*), which implements a variable-order Markov model. However, there may be other solutions that perform better at predicting strings of symbols derived from music. In particular, recurrent neural networks may be a good candidate, given their recent success in a variety of sequential prediction domains. The goal of this project is to substitute the PPM* prediction mechanism with a potentially better prediction mechanism and to systematically evaluate its prediction performance. N.B. Variant-specific requirements: in addition to the global requirements, machine learning expertise is required. Experience with recurrent neural networks is preferable, but solid general machine learning experience could also be sufficient.
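For orientation, the sketch below is a minimal stand-in for the prediction task (it is neither PPM* nor IDyOM): an order-1 Markov model with add-one smoothing that returns the information content of each event, the quantity a better prediction mechanism should reduce on average.

```python
from collections import Counter, defaultdict
import math

def information_content(sequence, alphabet):
    """Online order-1 Markov predictor: -log2 p(symbol | previous symbol),
    learning the transition counts as it goes (add-one smoothing)."""
    counts = defaultdict(Counter)
    ics, prev = [], None
    for symbol in sequence:
        if prev is not None:
            total = sum(counts[prev].values()) + len(alphabet)
            p = (counts[prev][symbol] + 1) / total
            ics.append(-math.log2(p))
            counts[prev][symbol] += 1
        prev = symbol
    return ics

melody = "CCGGAAGFFEEDDC"                      # a toy pitch sequence
print([round(ic, 2) for ic in information_content(melody, set(melody))])
```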
Contact: B. van der Weij, MSc
Starting date: Spring 2017. [position filled]