Project ideas

For students of music psychology/cognition and systematic musicology

Project ideas for final theses in the field of systematic musicology

The following project ideas are intended for students who are:

  • located anywhere, whether in Graz ("Musikologie" and related programs) or abroad
  • at any level and looking for a topic for their bachelor's or master's thesis or doctoral dissertation

If you are interested in research on a particular topic, you can search for it in this file. If you can't find it, search for synonyms or related words. If you still can't find it, don't hesitate to ask me.

According to the theory of effective altruism, our efforts should be focused on issues that are big, solvable, and neglected. Actually, that idea applies to any research. If you are considering a career as a researcher, these are important criteria that guide your choice of topic. Others are personal interest and motivation, and the match between your skills/background and the skills/background required to successfully address the topic. "Big" means the implications of making progress are big, e.g. many people could benefit. "Solvable" means you have an intuition that it could be easier than other researchers seem to think to make significant progress toward understanding the issues. "Neglected" means there is surprisingly little research on the question yet, perhaps because no one thought of it, or few people have the right interdisciplinary background.

A word of warning. The following ideas are not revised regularly. Some may be out of date, while others will continue to be current for a long time. Some may be superseded by recent publications. Many are preliminary or have not been carefully thought through. I am not offering any guarantees that a project based on one of the listed ideas will be fruitful. That will depend on many factors. The aim of this list, then, is to give students ideas, get you thinking, and help you to come up with ideas of your own. Pick an idea that looks interesting, and we can talk about the possibilities. Suggestions to improve the content are always welcome.

If you publish an idea from this page, please cite it as follows (APA format): Parncutt, R. (year, month day). Research project ideas for students of systematic musicology and music psychology. Retrieved from uni-graz.at.

Contents

To find the text on a topic from the following list, please search for the topic within this document.

Music psychology and sociology; music and emotion

  • Premusical development
  • Music and non-musical skills
  • Can music alleviate depression?
  • Soundscape studies
  • Mood regulation and the origins of music
  • Music and minorities in Graz
  • Affektenlehre and the psychology of musical emotion
  • Individual differences in motherese and personality
  • Music and spirituality
  • Music and love
  • Motherese in children's story CDs
  • Self-efficacy and mood regulation through music
  • Timbre and emotional implications of musical chords
  • Emotional communication between mother and fetus
  • Mood regulation through music: musicians versus non-musicians
  • Emotional properties of everyday sounds
  • Recognition of emotional states from internal body sounds
  • Strong experiences of music by the deaf
  • Effect of analytical or intellectual engagement on enjoyment of music
  • Differences between motherese and regular speech
  • Music selection and learned helplessness
  • Peak experiences in different art forms
  • What is a musical style?
  • Emotion in Western classical music
  • Origins of musical aural skills (OMAS)
  • The musical emotions nostalgia and sentimentality
  • Memory for pop songs
  • Altered states, trance, ecstasy, flow
  • Why do people download music from the internet?
  • Emotional connotations of major and minor
  • Psychology of audio branding
  • Emotional qualities of musical chords

Performance, rhythm, expression; psychoacoustics; music theory

  • Pedagogical application of fingering models
  • Forward motion in chord progressions
  • Individual differences in the perception of chord roots
  • Performance teachers' aims, issues and values
  • Perception of musical roughness and dissonance
  • The historical development of tonal-harmonic syntax and the origins of tonality
  • Immanent versus performed accents
  • Tonalness, consonance, and prevalence of pc-sets
  • The tonic as home
  • Expression in Baroque harpsichord music
  • Empirical determination and tracking of musical key using probe triads
  • Psychological reality of published music analyses
  • Structured listening to recover a performer's conception of musical structure
  • A psychoacoustically based, computer assisted music theory
  • Psychological testing of alternative music notations
  • Rhythm, walking, heartbeat
  • Ecological theory of electroacoustic music
  • Perceptual bases of Schenkerian theory
  • Spontaneous dance movements
  • Melodies of residue tones
  • Perceptual basis of Riemann's functional harmony
  • Tempo and tonality
  • Pitch shifts of complex tones
  • Chord-scale compatibility in jazz
  • Mathematical and musical skills
  • Perception of mistuning
  • Similarity of pc-sets
  • Subjective duration of a passage of music
  • Fifth relationships between chord roots
  • Rhythmic hysteresis
  • Structure of pop hits
  • Prevalence of jazz chord symbols

Introduction

Dear students,

Music psychology has grown considerably in recent decades, and the number of potentially interesting research questions is large. Compared with this great variety of possibilities, the following suggestions are a small selection. If one of the following ideas interests you, I invite you to discuss the topic with me and, if appropriate, to adapt it in light of the most recent literature.

As a rule, students find their own research topic for their bachelor's thesis, master's thesis or dissertation in the Konversatorium Systematische Musikwissenschaft. The present text is intended to help you find a topic that matches your interests, experience and abilities.

Theses may be written in German or English. I recommend writing in English, because English-language theses can be read and cited worldwide on the internet - including in the German-speaking world. I will of course support you in both languages. Anyone who is planning a research career, or does not want to rule one out, should write in English. The following texts are formulated in English to enable English-language internet searches and, where applicable, to attract international students.

List of project ideas

Can music alleviate depression?

The number of people taking antidepressants is steadily increasing. Should patients play a more active role in treating their own depression? Most depression is ultimately caused by a person's environment and situation. Perhaps the best long-term treatment is to change that situation so that antidepressants become unnecessary. A possible way to do this is with music, but from a clinical viewpoint the effectiveness of music as a treatment for depression is unclear. In this study, we will work together with one or more neurologists, psychologists, or therapists who can prescribe antidepressants and regularly do so. We will seek the collaboration of relevant researchers in Graz, e.g. Institut für Psychologie, Klinische Psychologie; Med-Uni Graz, Psychiatrie; Musiktherapie an der KUG. These professionals would be motivated to participate by the prospect of co-authoring a future publication. They would identify a group of patients with mild depression who are otherwise healthy, such that it is unclear whether they should take antidepressants or not. It would therefore be ethically acceptable to assign them randomly to different experimental groups. One group would receive a prescription of antidepressants. Another would be "prescribed" a special regime of music listening and be told that recent research suggests music listening is more effective than pills in their case. The prescribing professionals would agree to select the participant pool but to have no influence at all over the assignment of patients to groups, which must happen entirely randomly under the control of the experimenter. Perhaps the patients should not even be informed (or allowed to guess) that they are taking part in a scientific study, because any information about the aims of the study could distort the result (whether that is acceptable is an interesting ethical question).
Participants should believe that the prescribed music or tablets constitute the best available treatment for their case. A control group will receive no treatment at all, along with an explanation that no treatment is best in their case (apart perhaps from behavioral advice that is the same for all participants). Participants in the music group will be asked about their favorite music in a short questionnaire and later given a data stick with a large amount of such music on it (would it be legal to convert YouTube files into mp3, and would the sound quality be good enough?). They would also receive brief instructions on how and when to listen to the music (probably with a mobile phone and headphones, assuming everyone can easily do that). The pill group and the control group would also fill out a questionnaire of similar length and character on a relevant topic. The pill and control groups may be further divided into two: those who listen to stories and plays in a fashion equivalent to the music group, and those who do nothing special. Those listening to stories or plays would be asked about their preferences in a manner analogous to the music group, and given mp3 files in the same way. All participants would keep a daily diary about how they feel and why, and participate in two interviews, one before the project begins and another after it has been running for 1-2 months, so we would collect and analyse both quantitative and qualitative data.
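
The random assignment described above can be sketched in a few lines of Python. This is a minimal illustration of balanced randomization, not a clinical protocol; the group names and sample size are hypothetical.

```python
import random

def assign_groups(participant_ids, groups, seed=None):
    """Randomly assign participants to groups of (near-)equal size.

    Shuffling the full list and dealing it out in rotation keeps the
    group sizes balanced, unlike independent per-person coin flips.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: groups[i % len(groups)] for i, pid in enumerate(ids)}

# Hypothetical example: 24 patients, three conditions
patients = [f"P{n:02d}" for n in range(24)]
conditions = ["antidepressant", "music listening", "no treatment"]
result = assign_groups(patients, conditions, seed=1)
```

Fixing the seed makes the allocation reproducible for documentation, while the experimenter (not the prescribing professional) controls it.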

Soundscape studies

Music psychologists have long talked about the role of everyday sounds in music perception, but good qualitative studies of the perception and meaning of everyday sounds are still lacking in music psychological journals. Students in this study will connect two fields that have traditionally worked separately: soundscape studies and music psychology. There is a need to understand everyday sounds and music with an ecological approach using qualitative research methods. A synergy could bring new insight into (i) how everyday sounds affect and control our everyday experience and behavior, and (ii) the content, function, and meaning of music. Here are some interesting links: International Ambiences Network, World Forum for Acoustic Ecology, interdisciplinary approaches to soundscape studies. This project could be co-supervised by the Institut für Geographie und Raumforschung and the Centre for Systematic Musicology. Further information: Prof. Dr. Justin Winkler.

Individual differences in the perception of chord roots

In Parncutt (1993), I tested and improved a model of pitch salience in musical chords (Parncutt, 1988; Terhardt, 1976). Although the data were consistent with the model, there is a need for a more comprehensive investigation (more chords, more listeners; comparison of results for chords of octave-complex tones with results for chords of harmonic-complex and pure tones). New data could lead to improvement of the model and development of new music-theoretical applications. We could for example do an internet survey whose software could also be used to test local listeners.

It would be especially interesting to compare results for "fundamental listeners" (GrundtonhörerInnen) and "overtone listeners" (ObertonhörerInnen). Seither-Preisler et al. (2007) and Schneider et al. (2005) found that when listeners were presented with harmonic complex tones with missing fundamentals, some consistently heard a pitch corresponding to the missing fundamental, while others consistently heard a pitch corresponding to a clearly audible partial. Musicians were more likely to fall into the first group ("fundamental listeners"). Only they are expected to hear chord roots in the way that I predicted in Parncutt (1988). This hypothesis has never been carefully tested. A possible co-supervisor for this project is Dr Annemarie Seither-Preisler, Institut für Psychologie der KFU; we are lucky that an international expert on this phenomenon is living and working right here in Graz.
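
As a rough illustration of how such a root model can be implemented, the sketch below scores each pitch class of a chord as a candidate root using root-support weights in the spirit of Parncutt (1988). The numerical weights are illustrative placeholders, not the published model parameters.

```python
# Approximate root-support weights for intervals above a candidate root
# (unison, perfect fifth, major third, minor seventh, major second).
# These numbers are illustrative, not the published values.
ROOT_SUPPORT = {0: 10, 7: 5, 4: 3, 10: 2, 2: 1}

def root_estimate(pcs):
    """Score each pitch class of a chord as a candidate root.

    Returns (best_root, scores): the highest-scoring candidate and
    the full score table.
    """
    scores = {}
    for cand in sorted(set(pcs)):
        scores[cand] = sum(ROOT_SUPPORT.get((pc - cand) % 12, 0)
                           for pc in set(pcs))
    best = max(scores, key=scores.get)
    return best, scores

# C major triad {0, 4, 7}: the conventional root C (pc 0) should win
best, scores = root_estimate({0, 4, 7})
```

With weights of this shape the major triad has a clear winner, while more ambiguous sonorities produce near-ties, which is exactly the kind of prediction that could be compared between fundamental and overtone listeners.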

Music and non-musical skills

Does music enhance non-musical skills? This is an important question in music psychology, and there is still considerable uncertainty about it. A journalist recently argued convincingly for a strong connection between music and success in other areas (Joanne Lipman, New York Times, 12.10.2013), but there is also a good counterargument that the primary value of music lies in cultural participation and quality of life (link). It would be interesting to follow up her approach in a systematic study of the musical skills of successful people. If successful people are often practising musicians, as Lipman suggests, then there might be some kind of relationship between their musical and non-musical skills. Conversely, according to this theory, graduates of music academies should be more successful outside of their field than graduates in other disciplines. But this thesis may only apply to people whose main field of endeavour is not music. Thus, there are many possible ways to approach this question, but no clear and easy one. Here is an idea. Make a list of common professions in which it is possible to be very "successful" (law, manufacturing, politics and so on). Find people in those professions and ask them who the most "successful" representatives of those professions in Graz are. In this way, make a long list of successful people about whom there is considerable agreement. Now contact those successful people with a simple question: Do or did you perform music, and if so on what instrument and at what level? Of course it is hard to contact successful people, because they are busy and popular. So we will talk about strategies for achieving that, as well as how to avoid data biases that might result from some people being easier to contact than others. If enough people respond, it may be possible to do a statistical analysis. Otherwise, we could do a qualitative study to try to understand how music contributes to success in other areas.
A further possibility is to analyse data on the future careers of university graduates, if good data of that kind is available. Finally, in all of these cases we will have to consider the effect of socio-economic status, which may be the ultimate cause of both "success" and musical ability. For this reason, a clear quantitative result is unlikely; but a good qualitative study could be revealing.

Origins of musical aural skills (OMAS)

The OMAS internet survey explored what environmental and social factors support the development of very good aural skills (Parncutt et al., 2006c). Since we cannot rely on internet participants' claims about their aural skills, we will test these skills directly over the internet and focus on the data of participants with the best skills. This involves writing a test of aural abilities - specifically the ability to recognize musical intervals and chords when presented melodically or harmonically. The software has already been written as part of a Diplomarbeit (contact: Philip Weber, Institut für Angewandte Informationsverarbeitung und Kommunikationstechnologie, TU Graz - a co-author of any future publication). The project may also involve analysis of other OMAS data on the origins of musical aural skills.
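
The core of such a test is simple to sketch: generate randomized interval-identification trials and score the answers. The interval subset and pitch range below are illustrative assumptions; the real test would build on the existing Diplomarbeit software.

```python
import random

# Illustrative subset of intervals (semitones -> name)
INTERVALS = {1: "minor second", 2: "major second", 3: "minor third",
             4: "major third", 5: "perfect fourth", 7: "perfect fifth",
             12: "octave"}

def make_trials(n, seed=None):
    """Generate n trials as (lower MIDI note, interval size in semitones)."""
    rng = random.Random(seed)
    sizes = list(INTERVALS)
    return [(rng.randint(48, 72), rng.choice(sizes)) for _ in range(n)]

def score(trials, answers):
    """Proportion of trials whose interval size was identified correctly."""
    correct = sum(1 for (_, size), ans in zip(trials, answers) if ans == size)
    return correct / len(trials)

trials = make_trials(20, seed=7)
perfect_answers = [size for _, size in trials]
```

In a web deployment each trial would be rendered as audio (melodic or harmonic presentation) and the participant's multiple-choice answer compared against the stored interval size.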

Many musicians claim that they can transcribe entire scores in real time in their minds as they listen. This feat has not yet been the subject of systematic empirical study. It would presumably be possible to find 5 or 10 such musicians in Graz and put their hearing to a stringent test. They would participate voluntarily because of their interest in demonstrating and understanding their ability. The data might help us to understand which aspects of musical structures are most difficult to perceive. Theories of the perception of pitch-time structures (masking, salience, Gestalt, auditory scene analysis...) might be able to explain the data. Cognitive music psychologists interested in musical expertise (e.g. Palmer, Sloboda) have discovered interesting things about how musicians perceive structure by studying the errors made by expert musicians, but no-one has ever done what I am suggesting. We would encourage the participants to participate in OMAS and ask what childhood experiences may underlie their exceptional abilities.

The musical skills of mathematicians

It is sometimes claimed that mathematicians and physicists have superior musical skills or perform music more often than other people or perhaps than other academics. The first question to ask is whether that is really true. To check that we need some reliable statistics including comparisons with various control groups. If it is true, the next question is why. Is the way we think about musical structures similar to the way we think about mathematics? Are both related to what psychologists call spatial skills and reasoning? Or is the connection the result of an external cause, e.g. some social or personality variable that mathematicians and musicians have in common?

Memory for pop songs

How big is our memory for music? One way to address this question is to ask how many songs the average person knows. 100? 1000? 10,000? The answer depends on your definition of "knowing". An experimental participant could be said to "know" a song if s/he can sing back most of the chorus and part of the verses (melody and lyrics) after hearing a few words/tones from the start or the chorus. Recognition should not be affected by musically typical changes to key, tempo, timbre or harmony. To make progress on this problem we need to explore the musical memory of individuals (case studies). That is hard to do because you have to guess what is in memory - there is no way to explore the contents of memory systematically. A possible approach is to use yourself as an experimental subject. I have found the texts of about 700 songs that I "know" (you can read them by entering my homepage address and adding /ALLSONGS.doc; I have not linked this file to the internet because there might be a copyright problem). I guess if I sat at my computer for a few weeks I could expand this list to 2000 songs. The student doing this study could do exactly that and perhaps work with another student to get comparative data. We could then do some analysis of the content. The advantage of choosing this project is not only that it is interesting and ground-breaking; you will also end up with a very interesting documentation of your own musical memory.

Structure of pop hits

What makes a pop song successful? Knowing the answer to this question would be like knowing how to predict future changes in currency exchange rates: you could make a lot of money. But it is unlikely that research will uncover universal principles for writing successful pop songs. Fashions and contexts change, so the success of a song depends not only on the music itself (the structure of the melody, the timbre of the voice and so on) but also on the time and place. It should nevertheless be possible to retrospectively make generalizations about successful songs in a given place (e.g. Western international pop culture) and period (e.g. the 1960s). This project would involve first listing the top 50-100 pop songs from several periods (e.g. 1960s, 1970s, 1980s, 1990s) on the basis of existing charts, transcribing the melody of each song (or finding reliable transcriptions), and calculating a range of quantitative parameters for each melody such as range in semitones, number of scale steps used, the same number weighted by frequency of use or duration, degree of chromaticism in the melody, ditto in the accompaniment, the range of note durations, the number of different note durations, the average number of notes per second, the range of this value, degree of syncopation, total duration of verses compared to choruses, degree of repetition (perhaps including different hierarchical levels), and so on. A similar study being carried out by Reinhard Kopiez and Daniel Müllensiefen (presented at DGM-Tagung, Würzburg 2010) uses sophisticated computing tools developed in music information retrieval; the present study is intended as a low-tech replication (the said parameters can be calculated by hand) that strives for a balance between subjective and objective approaches.
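
Several of the parameters listed above can be calculated by hand, but also trivially by machine. A sketch, assuming each transcribed melody is represented as a list of (MIDI pitch, duration-in-seconds) pairs (both the representation and the toy melody are hypothetical):

```python
def melody_features(notes):
    """Compute simple quantitative parameters for a transcribed melody.

    notes: list of (midi_pitch, duration_seconds) tuples.
    """
    pitches = [p for p, _ in notes]
    durations = [d for _, d in notes]
    total_duration = sum(durations)
    return {
        "range_semitones": max(pitches) - min(pitches),
        "distinct_pitch_classes": len({p % 12 for p in pitches}),
        "distinct_durations": len(set(durations)),
        "notes_per_second": len(notes) / total_duration,
    }

# Toy example: four quarter notes at 60 bpm (C4, D4, E4, G4)
feats = melody_features([(60, 1.0), (62, 1.0), (64, 1.0), (67, 1.0)])
```

Parameters such as degree of syncopation or hierarchical repetition need more elaborate definitions, but could be added to the same dictionary once operationalized.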

Performance teachers' aims, issues and values

Recent years have seen a steady stream of new edited books that present the results of music performance research. Parncutt and McPherson (2002) covered general aspects of music psychology, the psychology of specific musical skills and the acoustics of different musical instruments. Rink (1995, 2002) combined relevant humanities scholarship (including music history and analysis) and scientific research (e.g. in psychology of memory) with the academically grounded views and experience of excellent performers and teachers. Williamon (2004) surveyed a wide range of physical and psychological techniques that can help music students achieve excellence. Odam and Bannan (2005) addressed such topics as creativity, musical communication, improvisation, physiology of performance, and questions of artistic and ethnic interculturality. McPherson's (2006) book is oriented mainly towards children, but also includes a wealth of information that is relevant for music academies. Altenmüller, Wiesendanger and Kesselring (2006) focus on the physiological basis of the virtuoso technique and demonstrate convincingly that modern brain research can be directly relevant for musicians. Lehmann, Sloboda and Woody (2006) take as their focus psychological research that is relevant for musicians of all kinds, and help musicians without scientific or research training to interpret and apply the results of scientific research in music performance.

For this body of research to be useful for performers, it has to be made relevant and accessible for performance teachers and their students, who may be rightly suspicious of researchers who claim to know more than they do about their own craft. It is not enough simply to publish good research - it is also necessary to establish a constructive working relationship. One way to achieve this is to explore the important issues from the point of view of performers - both teachers and students - and then to investigate how existing research might contribute constructively and realistically to those specific issues.

This project might involve one or more of the following points:

  • Interviews with performance teachers (music profs) and their students about how performance teaching and performance research could most constructively interact, including discussion of specific recurrent issues and problems that they consider appropriate topics for research and what kind of support they would welcome (or not welcome) from performance researchers. The interviews would be transcribed and analysed using standard qualitative methods.
  • Identification of existing research that addresses the issues raised by the performers coupled with practical suggestions on how that research could be presented to students.
  • Detailed suggestions for future performance research projects that address other issues raised by the performers.
  • A further series of interviews with performance teachers and students following a presentation of the main aims and content of specific possible research-based courses for music students on topics such as efficient practice, improvisation, sight-reading, memory, intonation, expression, conducting, performance anxiety, music medicine, and the physics, physiology and psychology of performance on specific families of instruments (cf. above list of books). The interviews would address such issues as:
    • the extent to which music students should be exposed to such materials
    • which specific aspects of the proposed courses the teachers consider important or promising
    • what personal, political, organisational, legal or bureaucratic hurdles might be encountered when trying to introduce those courses in a specific institution
    • what strategies could be adopted to jump those hurdles and enable such changes

An advantage of this topic is that it is strongly career oriented: a student who chooses this topic may one day be offered a position at a music academy.

Perception of musical roughness and dissonance

No currently available model of musical roughness reliably predicts empirical data on the perceived dissonance of musical chords (Parncutt, 2006 a). Recent studies have cast doubt on the role of roughness in dissonance perception (e.g. McLoughlin). But it is clear that a musical cluster is somehow rough. To develop a useful model of roughness for music-theoretical use, published data should be systematically compared with predictions of various models. A good model may need to account for

  • modulation of the amplitude envelope
  • masking and audibility of individual pure-tone components
  • effects of absolute frequency (the dominance region of spectral pitch)
  • non-linear addition of contributions from different critical bands or beating components

The following simple experiment could isolate the last point from the others: How rough are chords of octave-complex (Shepard) tones representing different pitch-class sets? It might be interesting to try to predict the perceived dissonance of all possible pitch-class sets (including inversions) of three tones or even four tones as a linear combination of different predictors, including roughness and pitch ambiguity (the opposite of Stumpf's fusion, see Parncutt 1988). Statistically, this can be done by multiple regression between the predictors and the empirical data. But note that pitch ambiguity is not a very clear concept either. For example, the rock hit "Sweet Home Alabama" sounds consonant although its tonic (key) is ambiguous (Temperley, 2000).
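
The multiple regression mentioned above can be sketched with ordinary least squares. Here the dissonance ratings and the two predictor values are synthetic numbers invented purely to show the mechanics of fitting and recovering coefficients:

```python
import numpy as np

# Synthetic illustration: six chords, two predictors, invented values
roughness = np.array([0.1, 0.5, 0.9, 0.3, 0.7, 0.2])
ambiguity = np.array([0.2, 0.4, 0.8, 0.9, 0.1, 0.5])
# Pretend the "true" relationship is known, so recovery can be checked
dissonance = 1.0 + 2.0 * roughness + 0.5 * ambiguity

# Design matrix with an intercept column, then ordinary least squares
X = np.column_stack([np.ones_like(roughness), roughness, ambiguity])
coef, *_ = np.linalg.lstsq(X, dissonance, rcond=None)
```

With real rating data the fitted coefficients would indicate how much each predictor contributes, and the residuals would show which pc-sets the predictors fail to explain.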

Tonalness, consonance, and prevalence of pitch-class sets

Pitch-class sets (pc-sets) are sets of tones selected from the 12 steps of the chromatic scale (called "pitch classes" in music theory and "chroma" in music psychology). When each pitch class is labeled from 0 (C) to 11 (B), a C major triad becomes 047. Pc-sets can be generalized to be invariant under the mathematical operations of transposition and inversion. In everyday language: 047 (C major) and 158 (Db major) both belong to the category "major triad", which can be expressed relative to the root as 047. A further level of abstraction is to consider all intervals within a set of tones. In the case of the major triad there are three intervals: 3, 4 and 5 semitones (intervals of 1, 2 and 6 semitones are absent). This set of intervals characterizes both the major and minor triads and is the basis for the corresponding "pc-set".
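
The interval content just described is easy to compute. The following sketch counts the interval classes between all pairs of pitch classes; it reproduces the fact that the major and minor triads share the same interval content (one each of interval classes 3, 4 and 5):

```python
from itertools import combinations

def interval_vector(pcs):
    """Interval-class vector: counts of interval classes 1-6 in a pc-set."""
    vector = [0] * 6
    for a, b in combinations(sorted(set(pcs)), 2):
        ic = min((a - b) % 12, (b - a) % 12)  # interval class, 1..6
        vector[ic - 1] += 1
    return vector

major = interval_vector({0, 4, 7})   # C major triad
minor = interval_vector({0, 3, 7})   # C minor triad
```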

Pc-sets differ markedly in their tonalness. For example, the minor triad (in semitones above the root: 037) is highly tonal, while 012 is quite atonal. Pc-sets also vary in roughness: 012 is very rough and 037 is (relatively) very smooth. But there is no simple relationship between tonalness and roughness: some sets such as the diminished triad 036 are relatively atonal (in the sense of root ambiguity) but relatively consonant (in the sense of absence of dissonance). The situation is further complicated by the fact that our perception of pc-sets depends on our familiarity with them in tonal contexts. To begin to analyse this complexity, it would be necessary first to compare predicted tone profiles of pc-sets based on Krumhansl & Kessler (1982) and Parncutt (1988) with predictions of their roughness based on the average roughness of the six interval classes (cf. Huron, 1994) and the interval vector of each pc-set, which shows how often each interval class occurs in the set. The result would be tables of pc-sets with information about their tonal properties, which could be used by composers and music analysts. This is a good project for someone with basic programming skills. Another possibility is to ask to what extent composers of "atonal" music systematically avoid tonal references by consistently using the least tonal pc-sets. To answer this question, one could count how often specific pc-sets occur in analyses of atonal music. The results of such a study might of course be affected by biases of the analysts, either toward or away from relatively tonal sets.
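
A first version of the proposed table could weight each interval class by an assumed average roughness (cf. Huron, 1994). The weights below are invented placeholders chosen only to show the calculation, not Huron's published values:

```python
from itertools import combinations

# Invented illustrative roughness weights for interval classes 1..6
# (ic1 = semitone, roughest; ic5 = perfect fourth/fifth, smoothest).
IC_ROUGHNESS = {1: 1.0, 2: 0.7, 3: 0.25, 4: 0.2, 5: 0.1, 6: 0.4}

def roughness_score(pcs):
    """Sum assumed interval-class roughness over all pc pairs in the set."""
    score = 0.0
    for a, b in combinations(sorted(set(pcs)), 2):
        ic = min((a - b) % 12, (b - a) % 12)
        score += IC_ROUGHNESS[ic]
    return score

cluster = roughness_score({0, 1, 2})      # 012: should score high
major_triad = roughness_score({0, 4, 7})  # 047: should score low
```

Tabulating this score next to a tonalness measure for every trichord and tetrachord would yield the kind of reference table described above.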

Perception of mistuning in real musical contexts

The psychological literature on the perception of mistuning suggests that listeners are remarkably tolerant of mistuning in music performances. That in turn suggests that theories of tuning based on frequency ratios are irrelevant, because in real musical contexts listeners cannot distinguish between Pythagorean and Just variants (but they may be able to distinguish relatively large from relatively small tunings of intervals). That idea has important music-psychological and music-theoretical ramifications. But is it really true? To my knowledge no-one has ever directly measured tolerance to mistuning in real musical contexts. That is remarkable considering the large number of published experiments on the intonation of isolated intervals and musical passages, and on the perception of tuning in isolated intervals. In this experiment, participants hear musical passages in which selected tones are mistuned, and are asked to identify passages in which there is mistuning or to rate the degree of mistuning. One possibility is to generate passages artificially (e.g. in Sibelius) so that all tones are initially tuned to 12-tone equal temperament and mistunings are measured relative to that standard. It would be even better to start with slightly stretched tuning in which octaves are slightly bigger than 2:1 and other intervals are stretched accordingly. It is also possible to add controlled expression using software such as Director Musices. Another possibility is to begin with regular commercial music recordings whose intonation is very good according to expert musician listeners, mistune isolated tones using DSP software, and repeat the same design.
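
To construct such stimuli, mistunings are most conveniently specified in cents relative to (possibly stretched) equal temperament. A minimal sketch with A4 = 440 Hz as reference; the 2-cent stretch value is just an example:

```python
from math import log2

def midi_to_freq(midi, octave_cents=1200.0, a4=440.0):
    """Frequency of a MIDI note in equal temperament with a stretchable octave.

    octave_cents=1200 gives standard 12-tone equal temperament;
    e.g. octave_cents=1202 stretches every octave by 2 cents.
    """
    semitone_cents = octave_cents / 12.0
    return a4 * 2.0 ** ((midi - 69) * semitone_cents / 1200.0)

def cents_between(f1, f2):
    """Interval from frequency f1 to f2 in cents."""
    return 1200.0 * log2(f2 / f1)

a5_equal = midi_to_freq(81)              # 880 Hz in standard temperament
a5_stretched = midi_to_freq(81, 1202.0)  # slightly sharper
```

Selected tones of a passage could then be shifted by a controlled number of cents from this reference, and the detection threshold estimated from listeners' responses.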

Similarity of pc-sets

The perceived similarity of pc-sets is a central issue for composers versed in pc-set theory. For example, if a composer is aiming for coherence, s/he may work with pc-sets that sound similar to each other. Pc-set similarity is important in music analysis for similar reasons.

Several music theorists have developed mathematical models of the similarity of pc-sets, and the models have been tested empirically by researchers such as Samplaski and Kuusi using octave-complex (Shepard) tones (since the music theory to be tested is octave-generalised). A problem with such studies is that the perceived similarity of two sonorities or melodies always depends on the number of common tones - or more generally on pitch commonality (Parncutt, 1989). How can this aspect be separated from the data? An idea:

1. Qualitative, explorative stage: Free description of a range of sonorities (What does this chord sound like? What does it remind you of?) --> a list of adjectives

2. Quantitative stage: rate the same set of sonorities against the main adjectives used in 1.

3. Factor analysis to reduce the number of scales (and adjectives); repeat 2.

This approach could enable pc-set similarity to be quantified on a limited number of labelled dimensions. Assuming the dimensions to be independent, the distance between the sets could be calculated in Euclidean space and the results could be compared with music-theoretical predictions.
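
Assuming each sonority ends up with ratings on a handful of factor dimensions, the distance calculation is straightforward. The dimension values below are invented purely for illustration:

```python
from math import sqrt

def euclidean(p, q):
    """Euclidean distance between two rating vectors of equal length."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Invented factor scores (e.g. sharpness, tension, brightness) for 3 sets
ratings = {
    "037": (0.2, 0.1, 0.5),
    "047": (0.3, 0.1, 0.6),
    "012": (0.9, 0.8, 0.2),
}
d_triads = euclidean(ratings["037"], ratings["047"])
d_cluster = euclidean(ratings["037"], ratings["012"])
```

The resulting distance matrix could then be correlated with the music-theoretical similarity predictions.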

A further problem with empirical studies of pc-set similarity is that results are different for simultaneous and successive presentation. Since music-theoretical formulations of pc-set similarity do not consider roughness, they may better fit data on successive presentations. Ideally, this means presenting each pc-set in all possible orders - but that would mean an impossibly high number of trials. Instead one could present each set in a random order and average the responses of many listeners.

Affektenlehre and the psychology of musical emotion

This is an interesting opportunity for interaction between historical musicology (or history of music theory) and music psychology. Text in preparation.

Mood regulation and the origins of music

Saarikallio studied people's use of music to regulate their own mood and identified functions of music such as entertainment, revival/recovery, strong sensation, distraction, emotional discharge, mental work and solace. For children, music has a calming effect, helps them to concentrate, makes them happy and inspires fantasy (at least according to their parents). Such empirically determined lists and classifications could contribute to our understanding of music's ultimate nature and origins. In this study, different behavioral theories of the origin of music will be compared. "Behavioral theory" refers to the idea that a universal non-musical human behavior might represent the origin of music. We will hypothesise that behaviors that are more similar to music are more likely to represent its origin. Similarity can be evaluated by analysing behaviors (including music) into lists of aspects or features and comparing the lists. Relevant behaviors are flirting (Darwin's theory that music is used to attract mates), motherese (Dissanayake's idea that motherese represents the origin of music and perhaps of all arts), and social interaction and cohesion in general (a widely held theory of the origin of music which includes the previous two but is much broader). The study will begin by surveying theories of the origin of music, separating out those that are based on (almost) universal human behaviors, empirically analysing goals, functions and strategies associated with those behaviors, and comparing the behaviors with each other by comparing the lists.

Why do people download music from the internet?

Many people spend enormous amounts of time downloading music from the internet. Why do they do that? Possible reasons include: they feel good when they hear the music, they are curious about new music by a composer-performer whom they know, they want to be part of a group of people that likes a certain kind of music. Are the emotions experienced while listening to the music the main reason for downloading it? If so, what kind of emotions are they? This study aims to clarify these issues. Participants are people who often download music. They receive a notebook in which they answer a set of questions every time they download music (or they fill in a form on the computer, or record a message using a mobile phone). What music did they just download and why? What do they expect to get out of it? Can they describe the feelings they have when listening to the music? Do they know other people who listen to this kind of music? The responses would be analysed using standard qualitative methods. It may be possible to get funding from a company with a financial interest in legal internet downloads.

Psychology of audio branding

Audio branding has become an important advertising phenomenon. We hear a few musical tones and are reminded of a company or a product, by analogy to a visual logo. What are the scientific principles behind audio logo design? So far music psychologists have devoted little attention to this interesting topic; there was an interesting study by Brodsky (2011), and there is also an established forum for research on this topic, the Audio Branding Academy.

Emotional properties of everyday sounds

No one is sure why musical sounds are so emotional and where the emotion ultimately comes from, although there is no lack of speculative theories. One possibility is that musical sounds are related to everyday non-musical sounds that have their own intrinsic emotionality. One way to approach this idea is to take a bunch of everyday sounds, present them to listeners, ask them to describe them, extract the emotional aspects of their explanations, and attempt to explain their origins. Or you could ask the listeners to rate the sounds on different emotional or other scales. Christian Kaernbach has recently done some work like this, and meanwhile a new database of everyday sounds is available - which is useful because the sounds are standardized when they come from a public database.

Emotional connotations of major and minor

The emotional valence (the happy/sad dimension) of musical structures and, in particular, the major and minor triads seems to be one of music psychology's great mysteries. Ask ten music psychologists and you will get ten different answers. Countless researchers have wondered how to explain the origin of this phenomenon. Some even try to deny that the problem exists, but a casual survey of the tonalities of pieces of western music that are considered to be joyful and tragic confirms that joyful music is usually major and tragic minor. The exceptions do not seem hard to explain, for example those happy Hungarian or Jewish melodies in minor keys are evidently happy because of other features such as tempo, articulation, timbre and text. Is it simply the case that the major triad sounds happier because it is stable, which in turn is because it is more similar to the harmonic series and therefore has a clearer root (cf. Parncutt, 1988)? The chromaticism of music in minor keys (e.g. J.S. Bach, Mozart) is consistent with the idea that the minor triad is less stable, but also suggests that the happy/sad dimension might depend more directly on the voice-leading than on the triads themselves (cf. Meyer, 1956). To address this issue systematically, it will first be necessary to state the problem clearly. What exactly sounds happy or sad, and in what context? If randomly transposed triads in different inversions are presented to listeners, do the major ones really tend to sound happier? And how do the results depend on whether the listeners can recognize a triad as major or minor? That would be a straightforward first experiment. But music-theoretic considerations suggest that tonalities, not chords, sound happy (major) or sad (minor). If that is the case, one and the same major chord should sound happy if presented in a major-key context (as I, IV or V) and sad if presented in a minor key context (as III, V or VI). 
This hypothesis would be easy to test in a listening experiment, and the music-psychology community would be interested to read about the results. The project would begin with a survey of relevant literature from different areas of musicology (not only psychology, but also theory and history). An alternative explanation for the sadness of minor has it that the minor third interval by itself communicates sadness in both music and speech (Curtis & Bharucha, 2010). But if that is true, are well-known alternative explanations such as the information-theoretic approach of Meyer (1956) incorrect? Or can two different explanations be correct at the same time? If so, is it a coincidence that two different phenomena reinforce each other? Incidentally, in such projects it is always interesting to use sounds that have been used in other projects so that data can be compared.

Emotional qualities of musical chords

Daniela and Bernd Willimek have published a theory of “musical equilibration” that explains the emotional qualities of specific musical chords or pitch-class sets in Western tonal music. Some examples: The whole-tone scale, depending on how it is implemented in a musical texture, can give a strong feeling of floating or dreaming. Minor triads sound angry if loud and sad if quiet. Loud music based on a diminished seventh chord tends to sound threatening. It would be interesting to test this kind of theory in a controlled psychological study. You would need diverse, representative examples of each chord or pitch pattern, or music could be composed to completely control all variables. Listeners might be asked to rate different emotions evoked by music excerpts using standard methods from music psychology, e.g. the Geneva Emotional Music Scales (GEMS).

Music and love (or: music and personal relationships)

The word "love" is taboo in academic discourse. As soon as you mention it, people get embarrassed and start to suspect you are one of those wishy-washy fuzzy pseudo-scientists for whom research is either self-therapy or an ego booster. There are indeed plenty of such researchers in the world, and it is important to maintain critical distance from them. But since love is a powerful emotion upon which human survival depends, and since love is often associated with music, music psychologists should be interested in understanding it better, and many in fact are (e.g. Gunter Kreutz).

From an evolutionary viewpoint, love is what you feel when you are behaving in a way that will promote the transmission of your genes to future generations. Love motivates us to reproduce and to look after our children and grandchildren. It also motivates us to care for people who care for us (reciprocal altruism) or might even save our lives in a difficult situation. That music is associated with love is clear simply from the prevalence of love songs in practically all vocal styles, genres and cultures.

How can the link between music and love be better understood? Consider these possibilities:

  • Investigate the relative prevalence of love songs in different vocal styles, genres and cultures.
  • List the primary features or characteristics of music and love (based on a mixture of empirical and theoretical studies), and compare them. This is not easy to do! What is similar, what is dissimilar?
  • Investigate how music is heard or used in everyday situations that involve love in some way, from sex and childcare on the one hand to the creation of a positive group atmosphere in diverse situations on the other. Compare music associated with love with music not associated with love (kinds of music, frequency of use, aesthetic appraisal of music, function of music)
  • Survey different theories of the origins of music and consider the extent to which they involve love (defined in evolutionary terms). Relevant theories include Darwin's partner selection theory, the theory of motherese as a foundation for ritual and prenatal conditioning as a foundation for motherese, the theory of music as social glue - something that makes a group of people more coherent, efficient and likely to survive in competition with other groups.

Music and spirituality

A new popular scientific online magazine is addressing the spiritual significance of music from many different standpoints. If you go to their homepage and click on "releases" you will see an advertisement for a book based on musicians' answers to the question "What do you believe is the spiritual significance of music?" That could also be an appropriate topic for a research project in music psychology. A prerequisite is a thorough knowledge of qualitative research methods (e.g. Mayring, 2002).

Strong experiences of music by the deaf

That music plays an important role for people with impaired hearing is clear from the international success of the deaf percussionist Evelyn Glennie. This raises the question of music's emotionality for deaf listeners. The project would essentially involve a repeat of Gabrielsson & Lindström Wik (2003).

The historical development of tonal-harmonic syntax and the origins of tonality

The key profiles of Krumhansl and Kessler (1982) correlate well (r ~ 0.95, df=10) with the pitch salience profiles of major and minor triads (Parncutt, 1988, 1999a). A possible explanation is that major-minor tonality “emerged” in the Renaissance as major and minor triads became prevalent (although music theorists had not yet named them) - first within harmonic progressions, and later as final sonorities. Composers and improvisers may have maximized the closure of perfect cadences by intuitively adjusting the prevalence of each chromatic pitch class to match the pitch-salience profile of the final triad. Thus, major-minor tonality prolongs the tonic triad (Schenker, 1906).

This has interesting implications for music history, theory and psychology. Historically, one might begin to systematically consider the western history of music perception (cf. Eberlein, 1994). Moment-to-moment expectations during a musical passage depend on repeated exposure to specific musical patterns in the past. One might investigate the history of music perception by statistically analysing a computer database of music representative of each century or period using Huron’s (2002) Humdrum. This is a major project whose results would bring us closer to predicting the syntax of tonal western music (prevalence of specific pitch-time patterns) from a limited number of perceptual and cultural assumptions.

Peak experiences in different art forms

Strong emotions are experienced in art forms that develop with time, from one second or minute to the next: music, drama and literature (as one reads a book), and combinations such as musical drama (opera, musical) and film. Strong emotions are characterized by physiological reactions such as goose bumps, chills down the spine, tears, racing heartbeat, lump in the throat and so on. The emotions experienced when looking at or otherwise experiencing static visual art (paintings, sculpture, architecture) seem to be less strong, presumably due to the absence of temporal change in the art form itself. In this study, randomly selected people will be interviewed about their memory of strong experiences in any art form in order to get some idea of the differences between art forms regarding the strength and kind of emotions that they evoke. An exploratory qualitative study.

Altered states, trance, ecstasy, flow

This topic has been avoided in scientific research for reasons similar to those for which questions of musical emotion were long avoided - it's hard to test hypotheses on the basis of quantitative data. A further problem: in many musical cultures where people go into trance states in religious ceremonies (generally supported by music), they are constantly moving, which makes any kind of physiological measurement difficult. As technologies improve, this problem is being overcome. Meanwhile there is still a lot to be learned from qualitative studies such as the Diplomarbeit of Graz student Anita Taschler (see also the conference presentations by Taschler and Parncutt; you can download the ppt files from my publications page). An article by ethnomusicologist Judith Becker (2009) in Empirical Musicology Review discussed why this research has been neglected and the difficulties of reconciling the contrasting approaches of humanities and sciences. The apparently universal link between trance, music and religious ritual suggests that an understanding of this phenomenon will help us to understand the original and ultimate function of music. For all these reasons this has become a very interesting area in which to work.

The tonic as home

In music theory in the English language, the tonic is often referred to as the "home key" and a return to the tonic after modulation to other keys (e.g. in a classical development section) is referred to as a "return home". The implication is that the feeling of coming back to the tonic is like coming back home after a journey and re-establishing one's original or genuine identity. For example, Beethoven’s sonata Op. 81a is labeled farewell, absence and return; distance from the tonic in Schubert’s Die schöne Müllerin symbolizes estrangement and death (Youens, 1992).

In everyday life and independently of music, home means familiar faces (family, community) and places (territory). Animals identify and defend home territory; humans invest in creating/maintaining homes. The home has survival value for defence, recovery, healing, nourishment; children’s survival depends on proximity to home (Kahn & Kellert, 2002). The home is decorated with art and cherished objects that reflect/construct the inhabitants’ identity (Sherman & Dacher, 2005) and present a coherent narrative (Woodward, 2001). Home underlies personal identity (Proshansky et al., 1983); nomadic peoples feel strong spiritual attachments to homelands (Strang, 2000).

The emotional connotations of tonic and the home may be related to each other. Tonic return evokes positive emotions; its violation (interrupted cadence) evokes negativity and/or arousal (Meyer, 1956; Steinbeis et al., 2006). Major-minor tonal music is emotionally more positive than atonal for both westerners and non-westerners (Marin & Parncutt, 2007). Is that because its consonant sounds (major and minor triads etc.) are somehow universally pleasant or attractive? Or because the clear hierarchical structure of tonal music means that fewer cognitive resources are needed to process it? Or is tonal music preferred because of its prevalence (familiarity) and prevalent for political reasons (global dominance of western culture)? Or because tonal music somehow satisfies a need to feel at home, or to go away from and return to home?

Given this background, it is interesting to ask whether, or to what extent, the feeling of coming home in a piece of music is psychologically related to the more general (and indeed universal) feeling of coming home. This question may be addressed theoretically or empirically.

Theoretically, one might survey the psychological and sociological literature on the concept of home. What exactly does home mean for people and what is included in a typical home schema? The concept is also relevant for modern problems of migration and integration - to what extent do migrants create a new home for themselves in a foreign place and what factors make this possible or likely? Why is it often so hard for locals to tolerate foreigners coming to their home town and to accept multiculturality as part of their home?

It is not easy to test such a hypothesis empirically. Here are some contrasting approaches.

  • Participants (western musicians and non-musicians) hear short, non-modulating excerpts of tonal music (pop, classical...) that end on clearly defined tonic, dominant or subdominant harmonies. Three different groups of participants rate the endings in different ways. Group 1 learns about tonic, dominant and subdominant with listening examples and tries to identify them directly. Group 2 rates the closure at the end of the excerpts as low, middle or high. Group 3 indicates whether the end feels like coming home, reaching the doorstep, or staying outside or at a great distance (they might click on corresponding icons). The results of such an experiment will not prove or disprove the hypothesis of a psychological connection between home and the tonic, but if Group 3 performs relatively well in spite of the strange instructions, that would be consistent with the hypothesis that the tonic is symbolic of home.
  • An investigation of individual personality differences with respect to the concept of home, both generally and in music. Is the home more important for some personality types than others, and if so, which? What is the difference between "human home" (family and friends) and territorial home in this regard? What about gender differences? Do people for whom home is important also prefer tonal music, or specific kinds of tonal music? Can differences in music preference be related to the importance of home?
  • A study of song lyrics (pop/jazz, Lieder/opera, traditional/folk). Do words and meanings related to "home" occur more often in such texts than in everyday language or other texts, e.g. the newspaper? This question can be answered statistically by making a list of words related to "home", counting how often they occur in well-known pop songs (this can easily be done in a big pop lyric database such as the Risa Song Lyrics Archive www.risa.co.uk/sla/) and comparing relative frequencies with equivalent analyses of other available text corpora.
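The counting step in the lyrics study can be sketched as follows. The word list and the two example texts are invented placeholders; a real study would use full corpora and an inferential test (e.g. chi-square) on the counts:

```python
import re
from collections import Counter

HOME_WORDS = {"home", "homeland", "house", "return", "belong"}  # illustrative list

def relative_frequency(text, target_words):
    """Proportion of word tokens in `text` that belong to `target_words`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in target_words) / len(tokens)

lyrics = "Take me home, country roads, to the place I belong"
news = "The committee met on Tuesday to discuss the annual budget"

lyrics_rf = relative_frequency(lyrics, HOME_WORDS)  # 2 of 10 tokens
news_rf = relative_frequency(news, HOME_WORDS)      # 0 of 10 tokens
```

Comparing such relative frequencies across whole corpora would address the question statistically.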

A psychoacoustically based, computer assisted music theory

Pitch-class set theory (Forte, 1973) and pitch salience theory (Parncutt, 1988) might be usefully combined in computer-assisted music theory pedagogy. The project would involve two stages: writing and testing the software, and testing its pedagogical effectiveness in collaboration with music theorists and composers. The project may involve collaboration with KUG Institutes 1 & 16.

Psychological reality of published music analyses

Look in the music analysis literature for different analyses of the same pieces. The pieces should be relatively short. For each piece, carry out a series of psychological analyses, e.g.

  • Which basic emotions are experienced when the piece is heard? Track variations in valence and arousal and emotional intensity.
  • Where do listeners perceive the start of a new section (segmentation)?
  • Other tests corresponding to the kinds of analysis found in the literature.

In each case, compare the empirical results with the analytical results and analyse commonalities and differences. Since results depend not only on the score of the piece to be analysed but also on the interpretation, find several different recorded performances of each piece and present different performances to different participants. The participants themselves could be a mixture of musicians and non-musicians.

Psychological testing of alternative music notations

Conventional music notation, in which the tones of a diatonic scale correspond to the lines and spaces of a musical staff, may not be ideally suited for music in which every pitch in the chromatic scale occurs regularly, i.e. for the Western music of the past few centuries. The main problems are that conventional notation represents 12 pitches per octave by means of 7 vertical positions plus sharps and flats, and that it represents the same pitch class quite differently in different octave registers. In response to these problems, countless alternative notations have been developed and proposed in recent centuries. Read (1987) wrote a book about them, and the Music Notation Modernization Association attempted to evaluate many of them systematically. One reason why none of these alternative notations has caught on is presumably that it takes a lot of time and effort to learn a new notation system. Not only professional musicians, but also musicologists (including music psychologists and music theorists) invest enormous amounts of time learning to read conventional music notation. Understandably, they don't want to have to start again from scratch. So they tend to avoid the problem of conventional notation's shortcomings and the evaluation of alternatives by regarding the problem either as irrelevant ("conventional notation obviously cannot be improved") or impossible to solve ("it is clearly impossible to decide among the many possible alternatives"). But perhaps the real reason is that it is not worth learning an alternative notation unless a very large library of musical scores in that notation exists, so that one can always find the score of a specific piece. Whatever the reason, the problem has achieved a kind of taboo status. Experience with other academic taboos (think for example of the role of sexuality in music analysis) suggests that this taboo will one day be broken.

In recent years, the question of alternative notations has again become interesting - for a quite different reason. Modern computing technology makes it possible to automatically transcribe printed music in conventional western notation into other systems. This means that it is finally worth investing the time and effort into learning an alternative system.

In Parncutt (1999), I presented an experiment to compare different alternative music notations. The experiment has never actually been done. The idea is to break conventional music notation down into separate components and test each of these components by comparison to other possibilities. I now have access to a tailor-made computer program based on Finale that converts Finale data files into alternative notations. In collaboration with the author of the program, this could be used both to prepare the experimental stimuli and, independently of the empirical project, to transcribe music into alternative notations.

Forward motion in chord progressions

Chord progressions in which roots descend by fifths or thirds (e.g. C-F, C-a) are more common in Western music of the 17th-19th centuries, as well as in 20th-century jazz, than progressions in which roots ascend by fifths or thirds (e.g. C-G, C-e) (Parncutt, 2004). Consider for example the familiar progressions ii-V-I, I-vi-iv-ii-V-I, I-vi-ii-V-I. Why? This asymmetry is by no means a universal feature of major-minor(ish) tonalities such as Renaissance polyphony (see my presentation at MedRen 2000) and pop/rock harmony (see Temperley's paper at ICMPC 2000). One possibility is that 17th-19th-century listeners preferred progressions in which tones implied by the first chord are realised by the second. According to virtual pitch theory (Terhardt, 1976), the chord CEG implies pitches at the missing fundamentals F and A, which are “realised” if the following chord is FAC or ACE but not if it is GBD or EGB. A further complication is that the rising-falling asymmetry is clearer in the musical literature than in similarity judgments of successive musical chords (Parncutt, 1993). That experiment could be repeated using different chords, durations, listeners and musical contexts, and the asymmetry modelled mathematically.
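The virtual-pitch argument can be illustrated with a toy root-salience calculation, loosely in the spirit of Parncutt (1988). The root-support weights below are illustrative rather than the published values:

```python
# Weight of each interval (in semitones above a candidate root) as root support;
# hypothetical values for illustration only.
ROOT_SUPPORT = {0: 10, 7: 5, 4: 3, 10: 2, 2: 1}

def root_salience(chord_pcs):
    """Support for each pitch class (0=C ... 11=B) as a candidate root of the chord."""
    return {
        root: sum(ROOT_SUPPORT.get((pc - root) % 12, 0) for pc in chord_pcs)
        for root in range(12)
    }

s = root_salience({0, 4, 7})  # C E G
# C (pc 0) is the strongest candidate root; F (pc 5) and A (pc 9) also receive
# some support, consistent with the implied pitches F and A mentioned above.
```

In such a model, the salience of non-chord pitch classes after hearing one chord could be compared with the roots of candidate following chords.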

Fifth relationships between successive chord-roots

The most prevalent chord progressions in tonal harmony have fourth and fifth intervals between successive chord roots. Why? A possible answer is that the perfect fifth interval is the most consonant interval after the octave. But it is unclear what that means in the case of successive tones that are also chord roots. Consider this simpler, more straightforward explanation: if you are performing vocal polyphony in a 15th-century church, it will be easier to sing in tune (or to find the pitches at all) if successive chords have at least one tone in common. If two triads have two tones in common, only one tone will move, so we can hardly speak of harmonic progression any more. With this simple logic we can explain the predominance of triadic progressions with one common tone. And if both triads are composed of tones belonging to the same diatonic scale, the interval between the roots will automatically be a fourth or fifth. Thus, we don't need to know anything about Pythagorean number-ratio theory in order to explain the predominance of fourth and fifth intervals between successive chord roots. Nor do we need to assume that the cycle of fifths has psychological reality, as some research in cognitive music psychology has done. This study would primarily involve a new analysis of selected pieces of unaccompanied polyphony from the 14th-16th centuries. Given two successive "sonorities" (and regardless of how composers and theorists of the time might have thought about "harmony" or "sonority"), how many tones do they usually have in common, and under what circumstances? Are suspensions used to increase the effective number of tones in common? Is there plausible evidence in the music of this period that the origin of fourth/fifth relationships between successive roots in major-minor music lies mainly in practical limitations on the number of common tones between successive chords?
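The logic above can be checked by brute force over the diatonic triads. A sketch (pitch classes 0-11, C major scale assumed); note that it also exposes the one exception, the tritone between the roots of the F and B triads:

```python
from collections import defaultdict

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C major scale

def diatonic_triad(degree):
    """Stacked-thirds triad on a scale degree (0-6) of the C major scale."""
    return {C_MAJOR[degree], C_MAJOR[(degree + 2) % 7], C_MAJOR[(degree + 4) % 7]}

intervals_by_common = defaultdict(set)  # common-tone count -> root intervals seen
for a in range(7):
    for b in range(7):
        if a != b:
            common = len(diatonic_triad(a) & diatonic_triad(b))
            intervals_by_common[common].add((C_MAJOR[b] - C_MAJOR[a]) % 12)

# One common tone implies roots a fourth or fifth apart (5 or 7 semitones),
# except for the single pair F-B, whose roots lie a tritone (6) apart;
# two common tones imply roots a third or sixth apart.
```

So within one diatonic scale, "exactly one common tone" and "roots a fourth/fifth apart" almost coincide, which is the claim the corpus analysis would test against real polyphony.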

Empirical determination and tracking of musical key using probe triads

The probe tone method of Krumhansl and Kessler (1982) enabled music psychologists to make significant contributions to music theory by quantifying the theoretical concept of stability - the tonic scale degree as stable, the leading tone as unstable, and other tones with different levels of stability. A fundamental problem with their method is the assumption that the tonic is a tone. That may be true in many musical styles, but Western major-minor tonality is surely unusual in that it is based on sonorities of several voices, which can also function as tonics in their own right. In other words, the tonic of a major or minor key may be either the tonic triad or its root, the tonic tone. Consistent with that idea, Riemann developed a theory of tonal function in which dominant and subdominant triads are perceived relative to the tonic triad, and Schenker explained tonal works as prolongations of their tonic triads. The idea has interesting empirical implications: the tonality of a passage may be determined by presenting it followed by one of 24 major and minor triads and asking how well the triad follows the passage. The results of such an experiment would presumably be similar to those of Krumhansl's more complex (less parsimonious) method, explained in detail in Krumhansl (1990), in which listeners rated how well probe tones followed passages and the resulting tone profiles were compared (correlated) with the standard profiles of major and minor keys. A study in which results obtained from these two methods were compared with the judgments of music theorists could shed light on the nature of the tonal reference in major-minor music: Is it a tone or a triad? Or if it is both, to what extent is it one or the other? The results could also have interesting implications for key-finding models, to which a whole issue of the journal Music Perception was devoted (Vos & Leman, 2001).
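The correlational comparison described in Krumhansl (1990) can be sketched as follows. The profile values are the Krumhansl-Kessler ratings as commonly reported; verify them against the original before using them in a study:

```python
# Krumhansl-Kessler probe-tone profiles (tonic first), as commonly reported.
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_key(pc_weights):
    """Correlate a passage's pitch-class distribution (12 weights, C..B)
    with all 24 rotated key profiles; return the best-fitting key."""
    scores = {}
    for tonic in range(12):
        for name, profile in (("major", MAJOR), ("minor", MINOR)):
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            scores[(tonic, name)] = pearson(pc_weights, rotated)
    return max(scores, key=scores.get)

# A passage using only the C major scale, tonic tones emphasized:
weights = [4, 0, 2, 0, 2, 1, 0, 3, 0, 2, 0, 1]
key = best_key(weights)  # expected to be C major, i.e. (0, "major")
```

The probe-triad variant proposed above would simply replace the 12 probe-tone ratings with 24 triad ratings before the correlation step.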

Perceptual bases of Schenkerian theory

In Parncutt (1996), I considered some of the unexplored similarities between principles of Schenkerian theory and analysis (as summarized e.g. by Drabkin, 2002; Forte & Gilbert, 1982; Larson, in press) and psychological principles (e.g. Huron, 2001; Parncutt, 2004). Is a compound melody explicable by Bregman's (1993) theory of streaming? Are neighbor tones explicable by van Noorden's (1975) concept of fusion? Can implied tones be predicted by combining harmonic pattern recognition and streaming? Does a linear intervallic pattern obey Gestalt principles of good continuation? Is tonicization an increase in pitch salience? Are registral shifts due to the pitch ambiguity of harmonic complex tones? Is diminution related to Chomsky's generative grammars? Is the Ursatz a psychological schema? This project would be mainly theoretical and you would need a good background in both music theory and music psychology to tackle it.

Immanent versus performed accents

I would like to bring together pianists and psychologists to bridge the gaps between their approaches to practical issues in performance research. A possible experiment: If the theory of musical accent presented in Parncutt (2003) (cf. Palmer and Hutchins, 2006) is valid, it should be possible (1) to help performers prepare performances by analysing their repertoire, and their performances of it, according to the theory; and (2) to describe the unconscious decisions made by performers when interpreting a piece of music (cf. Clarke, 1995). Pianists would be suitable participants as they tend to think analytically and, when playing alone, have control over the entire musical texture. The project may involve collaboration with KUG.

Tempo and tonality

Expressive music performance includes tempo changes that are not notated. For example, a pianist may speed up during a "development" section in which different tonalities are visited and themes varied - an effect sometimes called stretto. Consider the following two well-known examples from the piano repertoire: the middle section of Schumann's Träumerei, and the episodes between repetitions of the rondo theme in Beethoven's Für Elise. In this project we test the hypothesis that in the performance of tonal music, performance tempo is slightly slower in passages in the tonic key and in thematic passages. By "thematic passage" I mean the duration of a well-defined melody or theme (in the theory of sonata form, for example, the first and second subjects), as opposed to transitional passages or bridges. The project will involve selecting a set of repertoire for analysis according to clear criteria (say, 20 contrasting pieces), finding many commercial recordings of that repertoire (say, 10 contrasting recordings per piece), analysing the score for passages in the tonic key and thematic passages, measuring the duration of those passages in the recordings and dividing by the number of measures to calculate the tempo, and statistically analysing the data. If the hypotheses are confirmed, we will then ask whether they are an artifact of some other effect. For example, perhaps pianists tend to speed up when there are more notes per measure (Sundberg and Friberg called this effect "the faster, the faster" in their performance rule system). For the first movement of Beethoven's Waldstein sonata, this idea would predict that the first subject in C major is performed faster than the second subject in E major, which would contradict our original hypothesis. To test this, we will calculate the mean number of notes per bar in each analysed section of the selected works and compare those values with the measured tempos.
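The tempo calculation in the measurement step is simple arithmetic. A sketch with invented timing values (the function name and all numbers are placeholders):

```python
def passage_tempo(start_s, end_s, n_measures, beats_per_measure=1):
    """Mean tempo of a passage in beats per minute, given its start and end
    times in a recording (seconds) and its length in measures."""
    duration_min = (end_s - start_s) / 60.0
    return n_measures * beats_per_measure / duration_min

# Hypothetical measurements from one recording: a 16-measure tonic-key theme
# followed by a 16-measure bridge, both in 3/4 time.
theme_tempo = passage_tempo(12.0, 44.0, 16, beats_per_measure=3)   # 90 bpm
bridge_tempo = passage_tempo(44.0, 72.0, 16, beats_per_measure=3)  # ~103 bpm
# The hypothesis predicts theme_tempo < bridge_tempo across many recordings.
```

Collecting such pairs across pieces and recordings would yield the data for the statistical comparison.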

Subjective duration of a passage of music

In pieces of music at different tempos, we seem to experience time passing either more slowly or more quickly. If we are entranced by the music, a long period of time may suddenly seem to have elapsed, as if time had been passing quickly. The literature on this important question is not very clear, and future research may yield new insights. Is there a relationship to the literature on the psychology of meditation? One idea is to adapt existing experimental paradigms for music, e.g. Tse et al. (2004). Important reading: the Diplomarbeit of Annekatrin Kessler (Uni Graz library).

Pedagogical application of automatic fingering models

In Parncutt et al. (1997), I developed an algorithm for fingering melodic fragments in piano performance. Since then, computer programs have been developed for fingering guitar and violin. The question now is whether these models can be applied in music education. One possibility is to teach music students the underlying principles of these algorithms and find out whether they find those principles useful when deciding on fingerings. Another possibility is to develop a user-friendly interface that musicians can use. The interface would systematically offer different fingering possibilities for given passages, which students could then try out in order to expand their fingering vocabulary. Participants in such a project would be limited to those with a relatively analytical and systematic approach to technical problems. A possible hypothesis is that a systematic, computer-supported approach to fingering problems can usefully complement existing approaches to technical development such as scales, arpeggios and technical exercises.

Structured listening to recover a performer's conception of musical structure

How, and how successfully, do performers communicate structure? In this experiment, performers (probably pianists, as piano MIDI data are easier to handle and control) listen to their own performances of a given repertoire and mark salient events on the score - either immanent (in the score, independent of performance) or expressive (performed). Other listeners do the same. The results are compared to find out what kinds of structural intention are most successfully realised. The results are also compared with MIDI data to understand how specific intentions are realised (e.g. in terms of tempo and dynamic curves) and in an attempt to explain why certain intentions are easier to communicate than others.

Rhythm, walking, heartbeat

What is the origin of musical rhythm? What role might walking and heartbeats have played? In this experiment, participants would be “wired up” to simultaneously record the times at which their feet hit the ground and their hearts beat during normal everyday activities. This could be done using commercially available jogging equipment (Barnett, 2003; Shoji, 2004; see also the New York Times article at the foot of this page), but it might be easier just to use some kind of mobile phone app. In any case, it could be done for 24 hours or even longer. Walking and heart-rate distributions (mode, width, asymmetry…) would be compared with musical tempo distributions. The experiment would leave open the question of the causal link between these distributions; one possibility is my theory of the prenatal origins of music. An interesting reference: Franek et al. (2014).

Spontaneous dance movements

In a recent study at the University of Jyväskylä by Luck, Saarikallio, Thompson, Burger and Toiviainen (presented at ICMPC 2010 in Seattle), volunteers were asked to dance to different styles of music. Their spontaneous dance movements were compared with (i) the musical style and (ii) the personality of the dancer. Data were collected with a sophisticated optical motion capture system, but many of the results could also have been obtained by low-tech systematic observation (each method has specific advantages and disadvantages). In the proposed study, volunteers (e.g. musicology and psychology students) will be asked to dance alone to selections of music corresponding to different styles (rock, funk, folk, bebop...). Better still, a party will be organised at which the guests agree in advance to having their dance movements videoed and viewed later by specific named people (members of the research team). That raises the interesting possibility (never before investigated, as far as I know) of systematically studying the effect of alcohol on dance movements in different styles and for different personalities (e.g. first record dance movements with no alcohol, then after one standard drink, then two, then three). This question is socially relevant (many people would be interested in the results) and the proposed method would have a relatively high degree of ecological validity (i.e. the situation would be relatively natural). Data analysis will involve both free description (qualitative analysis) and ratings (quantitative analysis). The videos will be viewed (mainly without sound) by student volunteers who are given clear instructions and some practice. In quantitative analyses, for example, the observers will rate the size and quality (e.g. smooth versus jagged) of movements in different parts of the body, without knowing which music was playing. If several dancers are recorded simultaneously, the observers will also describe or rate aspects of their spontaneous interactions. At the end, the dancers will be asked to complete a personality questionnaire. Results will be compared with published results of the Jyväskylä group.

Effect of analytical or intellectual engagement on enjoyment of music

Some musicians report that their enjoyment of music deteriorates when they analyse the music or their performance of it. Others report just the opposite. To understand this phenomenon, one might first interview music students about the effect of analytical knowledge on their enjoyment of music, separating different music styles and kinds of academic engagement (analysis, performance, history, psychology and so on). One might also conduct a longitudinal study on participants in an ear training or music theory course.

Rhythmic hysteresis: From controlled psychological experiment to complex sound installation

Supervisors: Gerhard Eckel and Richard Parncutt (2014)

Rhythmic hysteresis is an interesting and barely studied music-psychological phenomenon. The perceived regularity or irregularity of a sequence of rhythmic events depends on how it is gradually approached. Consider a sequence of interonset intervals in the ratio a:b:a:b etc. If a=b=1, the sequence is isochronous. If a:b=1.1:1, we perceive a slight irregularity, depending on tempo. If the ratio is changed gradually from 1 to 1.1, the perceived irregularity at the end of the process is not the same as if we change it gradually from 1.2 to 1.1. Put another way: if we change the ratio gradually from regular to irregular, the point at which we detect the boundary between regular and irregular depends on the direction of the change. This effect is called rhythmic hysteresis. In the planned study we will find out where this point lies in different circumstances. In the simplest case, listeners hear a sequence that gradually changes from regular to irregular or vice versa and tap a key when they hear a change. In a more complex case, the intervals between successive tones may include a random element, which is more typical of real music (consider, for example, Al Bregman’s well-known sound example on streaming in African xylophone music). In this case, we expect the listener’s sensitivity to irregularity to be reduced, but the hysteresis effect should remain. In an even more complex case, the sounds may be produced by a spatial arrangement of loudspeakers. When the listener moves around in this space, a regular sequence gradually becomes irregular because sound takes a certain time to travel from each speaker to the ear. An internet simulation of such a space (sound installation) can be found here: iem.at/~eckel/Zeitraum/.
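
The stimulus construction for the simplest case can be sketched as follows. This is only a sketch under assumptions of my own (linear interpolation of the ratio, and a constant pair duration so that overall tempo stays fixed while irregularity grows); real stimuli would of course need to be rendered as audio.

```python
def hysteresis_sequence(base_ioi=0.5, start_ratio=1.0, end_ratio=1.1, n_pairs=40):
    """Interonset intervals a:b:a:b... (seconds) whose ratio a/b moves
    linearly from start_ratio to end_ratio. Each a+b pair sums to
    2*base_ioi, so the overall tempo stays constant as irregularity changes."""
    iois = []
    for i in range(n_pairs):
        ratio = start_ratio + (end_ratio - start_ratio) * i / (n_pairs - 1)
        a = 2 * base_ioi * ratio / (1 + ratio)
        b = 2 * base_ioi / (1 + ratio)
        iois.extend([a, b])
    return iois

ascending = hysteresis_sequence(start_ratio=1.0, end_ratio=1.1)   # regular -> irregular
descending = hysteresis_sequence(start_ratio=1.2, end_ratio=1.1)  # more irregular -> irregular
```

Playing the ascending and descending versions and recording the moments at which listeners tap to report a change would give the two boundary estimates whose difference is the hysteresis effect.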

The planned experiment is suitable for a Master’s student who has studied sound and music computing, musicology and/or psychology. S/he should be able to program in JavaScript (the script for the installation simulation can easily be downloaded from the internet); to program a psychological experiment in a user-friendly software system such as PsychoPy; and to statistically analyse the data, for example in SPSS. S/he should have some background knowledge and practical experience in performing psychological experiments. An initial task for the student will be to survey the relevant literature; the phenomenon may have been described using a term other than “hysteresis”, so it may also be necessary to ask international rhythm perception researchers for references and advice. A successful student may become first author of an article submitted to a major international journal such as the Journal of New Music Research or Music Perception.

Emotion in Western classical music

Juslin and Persson (2002) analysed emotional expression in music by separating individual acoustic cues in structural parameters such as tempo, articulation, dynamics, timbre, timing and durational contrast, as well as variations in all such parameters. It would be an interesting project to analyse how a well-known Western composer (such as Mozart) communicated different emotions through written scores. The question sounds simple, but experience suggests that a detailed analysis can improve understanding and may even bring some surprises. The project might first involve selecting a number of short excerpts from the repertoire of that composer. The criterion for selection might simply be preference, which tends to reflect emotional intensity: just ask people which music they like the most, or use music for which the greatest number of recordings is available. Then ask listeners in a pretest what specific emotions are communicated by the excerpts. Since these data will depend on the interpretation (acoustic realisation) of each piece, compare contrasting interpretations. Then analyse the structure of the corresponding original scores by objective procedures that can be accurately described in your method section. For example, how many different note durations occur in the excerpt, and what is the relative frequency of occurrence of each (note duration distribution)? How stable is the tonality (pitch class distribution or tone profile), to what extent is the excerpt in a major or minor tonality, and how often do dissonances occur relative to consonances? On the basis of such data it may be possible to draw up a table of specific ways in which that composer communicates specific emotions in music notation, analogous to Juslin's table, and presumably also similar to it.
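
The objective score statistics mentioned above (note duration distribution, tone profile) are straightforward to compute once an excerpt has been encoded. A minimal sketch in Python, assuming a hypothetical encoding of each note as a (pitch class, duration) pair:

```python
from collections import Counter

# Hypothetical excerpt: one (pitch class 0-11, duration in quarter notes)
# pair per note; real data would come from an encoded score.
notes = [(0, 1.0), (4, 0.5), (7, 0.5), (0, 2.0), (4, 1.0), (11, 0.5), (0, 0.5)]

n = len(notes)
# Tone profile: relative frequency of each pitch class in the excerpt
pc_profile = {pc: c / n for pc, c in Counter(pc for pc, _ in notes).items()}
# Note duration distribution: relative frequency of each duration value
duration_profile = {d: c / n for d, c in Counter(d for _, d in notes).items()}

print(pc_profile)
print(duration_profile)
```

Profiles of this kind, computed per excerpt, could then be tabulated against the emotions reported in the pretest.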

The musical emotions nostalgia and sentimentality

Emotions such as nostalgia, magic, movement and arousal are more common in music than in everyday life (Scherer et al., 2001-02). To understand why this is so, one might survey humanities literature (history, philosophy, aesthetics, cultural studies, semiotics); create a library of nostalgic/sentimental music; have listeners rate each item’s expression of other emotions; ask musicians which musical structures evoke nostalgia and sentimentality (cf. Sloboda, 1991); and explore the evolutionary functions of these emotions. This project may involve collaboration with the Institut für Wertungsforschung der KUG.

Individual differences in motherese and personality

All parents and adults speak differently when the person to whom they happen to be "talking" is a baby. To what extent do parents differ from each other, and what does that depend on? In this experiment, parents will be recorded while engaging in motherese - musical vocal-gestural exchanges with their babies. The degree to which their speech shows typical characteristics of motherese (e.g. higher than normal frequency, exaggerated changes in frequency) will be evaluated by independent listeners. Perhaps different motherese styles can also be qualitatively characterized. These data will then be compared with independently gathered data on the personalities of the parents and perhaps also of the babies, using current theories of personality and standard scales and questionnaires. It might also be interesting to compare these data with the parents' own subjective accounts of their personality, the personality of their babies, and the strength and nature of their own motherese. The ultimate aim of the study is to understand the functions and variability of motherese in more detail, in order to gain more insight into the nature, ontogenesis and phylogenesis of music.

Differences between motherese and regular speech

What is the difference between infant-directed and adult-directed speech? A lot of research has been done on frequency contours and duration patterns in motherese, as well as syntax and semantics (e.g. the prevalence of specific parts of speech such as nouns), but what about specific vowels and consonants? Since the aim of motherese is often to make the baby smile (which motivates the adult to continue playing the motherese game), one might hypothesize that the "ee" sound associated with smiling happens more often in motherese than in regular speech. That is, not only is the mean fundamental frequency of motherese higher than normal, but also the mean formant frequency. If that is true, it can also be explained in another way: the baby's formant frequencies are higher due to its shorter vocal tract. Whatever the explanation, the first question to answer is an empirical one: are "ee" sounds more common in motherese than in regular speech? And does that explain the "ee" endings of "baby words" like teddy, horsey or Johnny? The project would involve transcribing the text of a few hours of infant-directed speech by different adults and comparing the results with recordings of natural improvised speech by the same speakers. The results have implications for the emotional connotations of timbre: bright timbres tend to be associated with positive emotional valence, and findings of this kind could suggest why. That raises an additional question: are babies directly sensitive to the association between sound timbre and emotional valence (both within and outside speech), or must they see the smiling lips of the adult before they make the connection?

Motherese in children's story CDs

There are many studies on motherese (adult-infant vocal play). The exaggerated contours of motherese attract the baby's attention, strengthen bonding, and promote language acquisition. To what extent are similar techniques used by adults who read stories aloud to children? To what extent may these techniques be regarded as musical, or at least pre- or proto-musical? Are adults reading children's books aware that they are speaking motherese? The project might involve comparing recordings of motherese in natural situations with commercially available CDs of children's stories and/or with recordings of adults reading stories to children. The comparison could be both quantitative and qualitative. Quantitatively, one could track the fundamental frequency and compare statistics such as the mean and standard deviation of the fundamental frequency as well as its first derivative (i.e. how quickly it changes). It would also be interesting to compare the mean and standard deviation of tempo, measured simply as the number of syllables per second. Qualitatively, listeners could continuously evaluate the emotional content both of the original recordings and of low-pass-filtered recordings in which little more than the fundamental frequency is audible. The qualitative evaluations could then be compared with the quantitative measures.
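
The quantitative comparison could start from summary statistics like the following. This sketch assumes a fundamental-frequency track sampled at regular intervals (in practice it would come from a pitch tracker, with unvoiced frames removed); the function name and frame interval are my own choices.

```python
import statistics

def f0_statistics(f0_track, frame_s=0.01):
    """Mean and standard deviation of a fundamental-frequency track (Hz),
    and of its first derivative (Hz per second between successive frames)."""
    deriv = [(b - a) / frame_s for a, b in zip(f0_track, f0_track[1:])]
    return (statistics.mean(f0_track), statistics.stdev(f0_track),
            statistics.mean(deriv), statistics.stdev(deriv))

# Hypothetical 50 ms of rising pitch, sampled every 10 ms
print(f0_statistics([200.0, 205.0, 215.0, 230.0, 250.0]))
```

Comparing these four numbers between motherese recordings and story-reading recordings by the same speakers would quantify how "motherese-like" the read-aloud style is.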

Emotional communication between mother and fetus

Why do musical sound patterns evoke strong emotional responses? In Parncutt (2006 b) and Parncutt & Kessler (2006), I asked whether the human fetus perceives the (emotional) state of the mother via the sound of her voice, heartbeat, breathing, movements, footsteps and digestive sounds. If so, fetal behaviour should depend on maternal state (cf. Mastropieri & Turkewitz, 1999). Participants would be pregnant women in the third trimester. Their (emotional) state and fetal movement would be monitored physiologically and subjectively, by cellphone calls. For this project to be successful, it would be necessary first to establish good connections with relevant departments of the LKH or with Sanatoria.

Recognition of emotional states from internal body sounds

Can one judge a person’s emotional state from their internal body sounds? The question is relevant for the theory that the origin of musical emotion lies in prenatal associations between sound, movement and emotion (Parncutt, 2006; Parncutt & Kessler, 2006). This question could be tackled empirically as follows. First, survey the medical literature on auscultation - listening to internal body sounds using a stethoscope to diagnose the cardiovascular system (heart murmurs and gallops), the respiratory system (wheezes and crackles) and the gastrointestinal system. Such sounds could be presented to listeners who would be asked to rate their emotional content. Or computer composers could try incorporating these sounds into their music and later speak about their emotional connotations. A follow-up study could be run as follows:

  • Participants would monitor their emotional state throughout a normal day. At the same time, internal sound recordings would be made from skin contact microphones.
  • The recordings would be analysed for relatively large changes in the general pattern. Sections of the recordings around and following such changes would be isolated for further analysis.
  • Listeners would describe the (emotional) quality of those passages, without knowing their origin.

Ecological theory of electroacoustic music

In an ecological approach, one might expect electroacoustic (or acousmatic) music to be preferred by listeners if they are able to imagine virtual objects that produce the sounds. The more precisely they can describe those objects, and the more consistent their descriptions are (both within and between listeners), the more they should like the music. The aim of this study would be to test that hypothesis. Listeners with different musical preferences, amounts of musical expertise and kinds of listening experience would be presented with short excerpts from a wide range of electroacoustic styles that focus on timbre and avoid familiar tonalities, meters and forms. The listeners would be asked either to describe the objects that make the sounds that they hear or to evaluate the music (cf. Di Scipio's theory of audible ecosystems).

Melodies of residue tones

Dowling investigated the relative importance of contour and rhythm for the recognition of melodies. By comparison, how important is pitch salience? Huron (2001) emphasized the importance of pitch salience for counterpoint. Pitch salience can be manipulated by removing different numbers of lower harmonics and by taking random selections of harmonics in a given range of harmonic numbers or of frequency. Listeners could be asked either to recognize or (in the case of musically trained listeners) to transcribe the melodies.

Pitch shifts of complex tones

This is a fascinating and neglected topic in music psychology. It could become a PhD project because of the need for new data in a number of different areas of interest. The project could start by replicating some of the older studies with modern digital control (Allanson & Schenkel, 1965; Hesse, 1987; Sonntag, 1983; Terhardt, 1988; Terhardt & Fastl, 1971; Webster & Schubert, 1954; Webster et al., 1952). On the way you could study individual differences and what they depend on. A more musical task would be to find sound recordings in which the entry of the melody sounds out of tune by a semitone or more (which can be tested by taking a short slice of sound, say 500 ms, and asking musicians to transcribe the chord) but both listeners and performers quickly adapt to this mistuning and no-one notices. It would be interesting to study that process.

Most musicians have experienced pitch shifts. Imagine you are trying to sleep, but there is loud rock music playing somewhere outside. You hear mainly the bass line thumping away, but sometimes you also hear the melody. Strangely, the singer seems to be singing in a different key. But if you hear the music at normal loudness with normal balance, it sounds fine. Another example: you are listening to pop songs on headphones in an airplane. There is a lot of background noise from the jet engines and the headphones don't fit properly into your ears. You can hear the singer's voice, but the accompaniment is hard to hear, and when you do hear it, it sounds out of tune. In both these cases the (physical) frequencies are exactly the same in all listening conditions, but the (experienced) pitches are not. In general, pitch depends on SPL relationships and masking - an effect known as pitch shift.

A central tenet of Terhardt's pitch theory is that the exact perceived pitch of any partial (spectral pitch) within a complex tone depends on its SPL and on the frequency and amplitude of other, nearby partials. Thus, the spectral pitches within a typical complex tone are all slightly shifted. This, he proposed, distorts the pattern of pitches within harmonic complex tones to which we are exposed in everyday sounds and especially speech: the harmonic series "template" that the brain may be considered to use to identify fundamental frequencies in everyday running spectra is slightly stretched. This was Terhardt's explanation of the octave stretch phenomenon (listeners prefer octaves that are slightly stretched relative to a frequency ratio of 2:1, and octaves in music performance also tend to be slightly stretched). All this was incorporated into a rather complex mathematical model (algorithm) whose predictions were tested against a range of empirical data (Terhardt et al., 1982). As far as I know, this is still the only available computer model that takes any sound as its input and attempts to predict all perceived pitches including their shifts and their saliences. It is also still the best explanation for octave stretch.

Recent years have seen the publication of many papers on the phenomenon of pitch shift that go well beyond Terhardt's approach and contradict some of his findings and assumptions (just type "Terhardt 'pitch shift' octave stretch" into Google Scholar). For example, pitch shifts can be produced by interactions with the harmonic "template" or by time-domain effects (phase differences between partials). These recent papers seldom discuss the musical relevance of their findings.

A masters or doctoral project might first involve a survey of this recent work and the question of its musical relevance (it is especially important for music theory: pitch is not the same as frequency, so any music theory based on frequencies is suspect), followed by an experiment such as the following. Take harmonic complex tones with different numbers of missing lower harmonics: for example, the lowest harmonic might be number 1, 2, 3, 4, 5 or 6. Have the listener adjust the frequency of a pure comparison tone until the two tones have the same pitch (in the usual way). The listener must actively adjust the pitch of the two tones to be the same, hearing the two tones in succession many times; if they hear two successive tones and indicate passively which tone is higher, their data will depend on timbre as well as pitch, and the results will be uninterpretable (a few published studies have suffered from this problem, and neither authors nor reviewers seem to have realised it). The experiment must be conducted using a good sound card and good headphones, and good measuring equipment may be required to check that the sound pressure levels are correct and distortion inaudible (ask at KUG:IEM). How big are the observed interval stretches? Do they correspond to empirically observed stretching in musical intervals greater than or equal to one octave (e.g. Rakowski et al. in the proceedings of ESCOM 2003)?
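
The stimuli for such a matching experiment can be sketched as follows. This is only an illustration of the construction (equal-amplitude harmonics, no onset ramps, no level calibration), not a ready-to-use experimental stimulus.

```python
import math

def complex_tone(f0, lowest_harmonic, n_harmonics=10, dur=0.5, sr=44100):
    """Harmonic complex tone on fundamental f0 (Hz) whose lowest present
    harmonic number can be varied (1, 2, 3, ...). Returns raw samples in
    roughly [-1, 1]; onset ramps and calibration are deliberately omitted."""
    harmonics = range(lowest_harmonic, lowest_harmonic + n_harmonics)
    n = int(dur * sr)
    return [sum(math.sin(2 * math.pi * h * f0 * t / sr) for h in harmonics)
            / n_harmonics for t in range(n)]

# e.g. a 200 Hz complex tone whose lowest harmonic is number 4 (800 Hz)
tone = complex_tone(200.0, 4)
```

In the experiment, the listener would repeatedly alternate between such a tone and an adjustable pure comparison tone until the two pitches match; the mismatch between the matched frequency and f0 is the measured shift.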

Since quite considerable individual differences in the size of pitch shifts have been observed, it would be interesting to run an additional experiment on each listener in which the pitch of a loud pure tone is compared with that of a quiet one (pitch shift due to SPL) and in which the pitches of the two tones in a simultaneous dyad of pure tones (say, a minor third) are measured. Listeners should be trained musicians who are good at focusing on pitch and ignoring other parameters like timbre and loudness, and it may be necessary to investigate this ability separately.

Perceptual basis of Riemann's functional harmony

According to Riemann, all chords can be interpreted as either tonic T, dominant D or subdominant S. In a well-known undergraduate textbook, Dieter de la Motte labeled the diatonic major and minor triads in a major scale T, Sp, Dp or Tg, S, D, Tp; in a minor scale, t, dG or tP, s, D, sP or tG. This theory raises some interesting psychological questions:

  • To what extent are Riemann's harmonic functions confirmed by psychoacoustical similarity judgments between chords and functional stereotypes? For example, does an E-minor triad in a C-major tonal context sound more similar to a C-major triad (hence the function Tg) or to a G-major triad (Dp)?
  • What is special about the tonic, subdominant and dominant triads? Could Riemann have adopted another set of chords as reference points? Do untrained listeners intuitively understand that these chords are reference points - and if so, to what extent?
  • Is Riemann's concept of functional harmony so fundamental that ear training courses can benefit by building on that concept? To what extent can listeners be trained to classify chords in a progression into T, S, and D (as if recognizing timbres), and later build on this skill by making more precise classifications?

Chord-scale compatibility in jazz

An important question in jazz theory and pedagogy is chord-scale compatibility: which scale goes with which chord (see wiki jazz scale)? There is no simple or generally accepted answer. Of course the chord tones must be part of the scale, and the scale itself should not include successive semitones (Pressing, Jazzforschung, 1977-78). But that leaves open many possibilities, e.g.: the scale should correspond to that of the main tonality, the scale should be as consonant as possible (with the greatest number of perfect fifths between scale steps), local leading tones (semitones below chord tones) should be preferred, and scale tones should be implied by the chord according to Terhardt's theory of pitch perception (Parncutt, 1988). Ideas of this kind can be tested by comparing their predictions with a statistical analysis of a database of transcribed improvised solos over different chord progressions. The progressions should contain a variety of chord types (e.g. chords based on 4 pcs - different kinds of seventh chord). 12-bar blues improvisations would be inadequate for this purpose, because they mainly involve major-minor ("dominant") seventh chords. From such a database it would be possible to estimate the probability of a given scale tone occurring in the context of a given chord (cf. Järvinen's article in Music Perception in which he reproduced Krumhansl's key profiles from blues improvisations). Ideally, the calculations would be carried out on computer (first encode the transcriptions, then analyse them), but it would also be possible, and probably faster, to do the calculations by hand. In the write-up, statistical results might first be presented in conjunction with notated examples, and then compared with predictions based on the theories listed above.
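
Once a database of solos has been encoded, the basic statistic is a conditional probability: how likely is each pitch class given the current chord symbol? A minimal sketch, with hypothetical toy data standing in for real transcriptions:

```python
from collections import Counter, defaultdict

# Hypothetical encoded events: (chord symbol, pitch class of the melody note);
# real data would come from encoded solo transcriptions.
events = [("C7", 0), ("C7", 4), ("C7", 10), ("C7", 7), ("C7", 4),
          ("F7", 5), ("F7", 9)]

counts = defaultdict(Counter)
for chord, pc in events:
    counts[chord][pc] += 1

def pc_probability(chord, pc):
    """Estimated probability of pitch class pc occurring over the given chord."""
    total = sum(counts[chord].values())
    return counts[chord][pc] / total if total else 0.0
```

Profiles of this kind, computed per chord type, could then be compared with the predictions of the candidate theories (main tonality, consonance, leading tones, implied pitches), in the same spirit as Järvinen's blues-derived tone profiles.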

Prevalence of jazz chord symbols

Why do some chords occur more often than others? The theory of Parncutt (1988) suggests that chords occur more often if they include root-support intervals (P1, P5, M3, m7, M2) above the root. Assuming that bebop jazz is based on 7th chords (rather than triads), the most common seventh chord is the one simply called "7" on jazz lead sheets and "Mm7" by music theorists, because it includes four of these root supports: P1, P5, M3, m7 (if the root were missing, P1 would be missing from this list). The most common ninth chord "9" has all five root supports. If we weight the root supports relative to each other (P1 more important than P5, etc.) we can build a simple model of the prevalence of a chord (symbol) based on the root supports it contains. Of course the matter is more complicated than that, and to make progress we first need objective data about how often chords occur in real music. The first task of this project would therefore be simply to count thousands of chord symbols in jazz lead sheets such as the various "real books" and "cheat books" - assuming that they correspond to what we are used to hearing in real music. The second task would be to develop a model to account for the data. If the model works, it may be regarded in the future as a fundamental contribution to jazz theory.
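
The simplest version of such a model just sums weights over the root-support intervals a chord contains. In the following sketch the weights are illustrative placeholders, not the values derived in Parncutt (1988):

```python
# Root-support intervals above the root, in semitones, with illustrative
# weights ordered P1 > P5 > M3 > m7 > M2; real weights would be fitted
# to the chord-symbol counts or taken from the literature.
ROOT_SUPPORT_WEIGHTS = {0: 10, 7: 5, 4: 3, 10: 2, 2: 1}

def prevalence_score(intervals_above_root):
    """Sum the weights of the root-support intervals present in a chord,
    given as a set of semitone intervals above the root (root itself = 0)."""
    return sum(w for iv, w in ROOT_SUPPORT_WEIGHTS.items()
               if iv in intervals_above_root)

print(prevalence_score({0, 4, 7, 10}))     # "7" chord: P1+M3+P5+m7 present
print(prevalence_score({0, 2, 4, 7, 10}))  # "9" chord: all five root supports
```

Rank-ordering chord symbols by this score and comparing the ranking with the counted prevalences in lead sheets would be a first test of the model.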

Timbre and emotional implications of musical chords

Music theorists and analysts often allude to the different timbres and emotional implications of musical chords, but no study has described those timbres and implications, either qualitatively (in everyday language) or quantitatively (on the basis of similarity judgments and multidimensional scaling solutions), in a way that has found its way back into music theory. How can the timbre and emotional flavour of a diminished seventh chord be described by comparison with that of a minor triad?

Expression in Baroque harpsichord music

Louis Couperin and Jean-Francois Dandrieu wrote music for harpsichord that specifies only pitches and pitch successions, but no durational contrasts (all tones are notated as whole notes). An analysis of timing and dynamics in performances of such works could yield interesting information about performance conventions for that music, and perhaps more general expressive principles. Data could be both MIDI data from real performances by local musicians and data extracted from commercially available performances.

Mood regulation through music: musicians versus non-musicians

Several studies have suggested that an important function of music is emotional self-regulation (see published studies by Saarikallio). For example, people put on a CD to put themselves in a certain kind of mood. Or they attend a choir rehearsal to forget about work. The question arises as to the effect of musical experience or expertise on such behaviors. Do musicians (or music students) regulate their mood this way, and if so, do they do it more or less often than non-musicians (or other students)? Do the two groups have different strategies? Which group can better explain what they are doing and why (metacognition)? One way to ask these questions is simply to interview people, but the results would be affected by participants' ability to describe their behavior (metacognition), which may be largely automatic and intuitive. Another idea is to contact people by mobile telephone at times of day when (according to self-report) the chance is relatively high that they are consciously using music to influence their mood - or music is influencing their mood, whether they like it or not. Then ask them specific questions - what exactly is happening, what are they doing, what kind of music is it, who is in control of the music, how do they feel right now, etc.

Self-efficacy and mood regulation through music

Why do some children seem more musically talented than others? One approach to this problem involves attributions (Zuschreibungen; tacit explanations for success and failure) and self-efficacy (Selbstwirksamkeit; Austin et al., 2006; Bandura, 1977; Painsi, 2003). Children who believe that they can improve their musical skills through practice (which is realistic, because it applies to almost all children) make faster progress than other children, because they are more likely to persist when the going gets tough. They are also more likely to enjoy the process of learning, and to be less dependent on the reward that is felt when a goal is achieved quickly and effortlessly. The application of the concept of self-efficacy to musical performance raises the question of whether it can also be applied in the complementary area of music perception. Research on music in everyday life has repeatedly demonstrated that people use music to manipulate their mood (e.g. Saarikallio & Erkkilä, 2007). For example, one might listen to a certain kind of music to get into the mood for going out in the evening, and another kind to recover from a serious loss. The question that I would like to ask in this project is whether this deliberate use of music is more common among people with high self-efficacy, or whether it is independent of self-efficacy. The project would involve measuring the self-efficacy of a random population using a questionnaire, exploring how that same group interacts with music in everyday life by means of another questionnaire, and comparing both quantitative and qualitative data from both questionnaires. Both questionnaires should as far as possible be taken from the current literature and standardised so that results can be compared across studies.

Music selection and learned helplessness

According to some sociological studies, people who feel that they have little control over their own lives ("learned helplessness") tend to watch soap operas more often than other people do. A possible explanation is that soap operas compensate for their feeling of helplessness. That raises the question of whether learned helplessness affects the choice of music to listen to. If music often behaves like a virtual person, or like virtual people involved in some kind of drama (Parncutt & Kessler, 2006), we might expect an effect. Do people with learned helplessness prefer different kinds of music from other people? For this study you would need a standardised questionnaire about learned helplessness. You could simultaneously test the relationship between other personality traits and music preferences.

Music and minorities in Graz

A seminar on the role of music in cultural integration was held in summer 2010. Six student groups interviewed representatives of six cultural groups in Graz. A master's or doctoral student could compare and reprocess their qualitative data and complement it with repeated, in-depth interviews with selected participants, or with participant responses to the existing analyses. Results could have interesting political implications. Representatives of all political parties would agree that migration has specific advantages but also creates problems. Does musical diversity belong to the advantages? Can music be used to address or solve some of the problems? Could the City of Graz promote integration through music? If so, how? A good master's thesis could then be condensed into an article submission to the journal Music and Arts in Action.

In 2021, the following idea was added. Music can contribute enormously to quality of life for people with disabilities. It would be interesting to talk about that with members of the following very successful band from Bruck an der Mur: https://www.pius-mundwerk.at/. This would be a qualitative study, so it is important to be clear in advance about methods and approach, including transcription and analysis methods.

What is a musical style?

The faculty of humanities has a doctoral programme ("Doktoratskolleg") entitled „Kategorien und Typologien in den Kulturwissenschaften“. There are many possibilities for music-psychological or systematic-musicological research within this area. For example, there has recently been a lot of research on automatic style recognition in the international music information retrieval community. A researcher might, for example, attempt to automatically categorize a large set of mp3 files into style categories such as pop, rock, jazz, classical and romantic - a difficult task, since there is so much sonic diversity within each category. A style is a typical case of a cultural category: style categories help us to understand music in its diversity and in its cultural and historical contexts, but style boundaries are difficult to locate and themselves depend on cultural, historical and academic context. What are the implications of that research in music information retrieval for musicology in general?
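As a rough illustration of the classification task described above, here is a minimal nearest-centroid sketch. The feature values, feature names and style labels are all invented for illustration; a real music information retrieval system would extract many more features directly from the audio (e.g. with a library such as librosa) and would use a stronger classifier and cross-validation:

```python
# Sketch: nearest-centroid style classification of audio tracks.
# Feature vectors are invented; a real system would extract features
# such as tempo, spectral centroid and percussiveness from mp3 files.
from math import dist
from statistics import mean

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    return [mean(col) for col in zip(*vectors)]

def train(labelled):
    """Map each style label to the centroid of its training vectors."""
    return {style: centroid(vecs) for style, vecs in labelled.items()}

def classify(model, vec):
    """Return the style whose centroid lies nearest to vec."""
    return min(model, key=lambda style: dist(model[style], vec))

# Hypothetical features: [tempo (BPM), spectral centroid (kHz), percussiveness 0-1]
training = {
    "classical": [[80, 1.5, 0.1], [70, 1.2, 0.2], [95, 1.8, 0.1]],
    "rock":      [[120, 3.0, 0.8], [140, 3.5, 0.9], [130, 2.8, 0.7]],
}
model = train(training)
print(classify(model, [125, 3.2, 0.8]))  # prints: rock
```

The interesting musicological point is exactly where such a system fails: tracks near a style boundary are classified inconsistently, which mirrors the observation that style categories are culturally and historically contingent rather than acoustically sharp.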

References

Altenmüller, E., Wiesendanger, M., & Kesselring, J. (Eds.) (2006). Music, motor control and the brain. Oxford, England: Oxford University Press.

Aures, W. (1985). Ein Berechnungsverfahren der Rauhigkeit. Acustica, 58, 268-281.

Austin, J. R., Renwick, J., & McPherson, G. E. (2006). Developing motivation. In G. E. McPherson (Ed.), The child as musician (pp. 211-238). Oxford: Oxford University Press.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Barnett, S. (2003). A comparison of vertical force and temporal parameters produced by an in-shoe pressure measuring system and a force platform. Clinical Biomechanics, 15, 781-785.

Bregman, A. S. (1993). Auditory scene analysis: Hearing in complex environments. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 10-36). Oxford, GB: Clarendon Press.

Clarke, E. (1995). Expression in performance: Generativity, perception and semiosis. In J. Rink (Ed.), The practice of performance (pp. 21-54). Cambridge: Cambridge University Press.

Curtis, M. E., & Bharucha, J. J. (2010). The minor third communicates sadness in speech, mirroring its use in music. Emotion, 10, 335-348.

Drabkin, W. (2002). Heinrich Schenker. In T. Christensen (Ed.), Cambridge history of Western music theory (pp. 812-843). Cambridge: Cambridge University Press.

Eberlein, R. (1994). Die Entstehung der tonalen Klangsyntax. Frankfurt: Lang.

Forte, A. (1973/1977). The structure of atonal music. New Haven, CT: Yale University Press.

Forte, A., & Gilbert, S. E. (1982). An introduction to Schenkerian analysis. New York: Norton.

Franek, M., van Noorden, L., & Režný, L. (2014). Tempo and walking speed with music in the urban context. Frontiers in Psychology, 5, 1361.

Gabrielsson, A. & Lindström Wik, S. (2003). Strong experiences related to music: A descriptive system. Musicae Scientiae, 7, 157-217.

Huron, D. (1994). Interval-class content in equally tempered pitch-class sets: Common scales exhibit optimum tonal consonance. Music Perception, 11, 289-305.

Huron, D. (2001). Tone and voice: A derivation of the rules of voice-leading from perceptual principles. Music Perception, 19, 1-64.

Huron, D. (2002). Music information processing using the Humdrum Toolkit: Concepts, examples, and lessons. Computer Music Journal, 26 (1), 15-30.

Hutchinson, W., & Knopoff, L. (1978). The acoustical component of western consonance. Interface, 7, 1-29.

Juslin, P.N., & Persson, R. S. (2002). Emotional communication. In R. Parncutt & G. E. McPherson (Eds.). The science and psychology of music performance: Creative strategies for teaching and learning (pp. 219-236). New York: Oxford University Press.

Krumhansl, C. L. & Kessler, E. J. (1982). Tracing the dynamic changes in perceived tonal organisation in a spatial representation of musical keys. Psychological Review, 89, 334-368.

Larson, S. (in press). Schenkerian analysis: Pattern, form, and expressive meaning. Prentice Hall.

Lehmann, A. C., Sloboda, J. A., & Woody, R. H. (2006). Psychology for musicians: Understanding and acquiring the skills. Oxford, England: Oxford University Press.

Mastropieri, D., & Turkewitz, G. (1999). Prenatal experience and neonatal responsiveness to vocal expressions of emotion. Developmental Psychobiology, 35, 204-214.

Mayring, P. (2002). Einführung in die qualitative Sozialforschung: Eine Anleitung zu qualitativem Denken (5. Aufl.). Weinheim: Beltz.

McPherson, G. E. (Ed.) (2006). The child as musician: A handbook of musical development. Oxford, England: Oxford University Press.

Meyer, L. B. (1956). Emotion and meaning in music. Chicago: University of Chicago Press.

Noorden, L. van (1975). Temporal coherence in the perception of tone sequences. Doctoral dissertation, Institute for Perception Research, Eindhoven, NL.

Odam, G., & Bannan, N. (Eds.) (2005). The reflective conservatoire: Studies in music education. London: Guildhall School of Music & Drama.

Painsi, M. (2003). Attribution von Erfolg und Misserfolg bei Musikschülern, deren Eltern und Lehrer. Diplomarbeit, Institut für Psychologie, Uni Graz.

Palmer, C., & Hutchins, S. (2006). What is musical prosody? In B. H. Ross (Ed.), Psychology of Learning and Motivation, 46, 245-278. Amsterdam: Elsevier.

Parncutt, R. (1988). Revision of Terhardt's psychoacoustical model of the root(s) of a musical chord. Music Perception, 6, 65-94.

Parncutt, R. (1993). Pitch properties of chords of octave-spaced tones. Contemporary Music Review, 9, 35-50.

Parncutt, R. (1996). Perceptual underpinnings of analytic techniques: From Rameau to Terhardt, Riemann to Krumhansl, Schenker to Bregman. Paper presentation at Society for Music Theory (Baton Rouge, Louisiana).

Parncutt, R., Sloboda, J. A., Clarke, E. F., Raekallio, M., & Desain, P. (1997 a). An ergonomic model of keyboard fingering for melodic fragments. Music Perception, 14, 341-382.

Parncutt, R. (1999 a). Tonality as implication-realization. In P. Vos and M. Leman (Eds.), Proceedings of the Expert Meeting on Tonality Induction (Nijmegen, Netherlands, '99) (pp. 121-141). Nijmegen: NICI. (See also article in press for 2011 in Music Perception.)

Parncutt, R. (1999 b). Systematic evaluation of the psychological effectiveness of non-conventional notations and keyboard tablatures. In Zannos, I. (Ed.), Music and signs (pp. 146-174). Bratislava, Slovakia: ASCO Art & Science.

Parncutt, R. (2003). Accents and expression in piano performance. In K. W. Niemöller (Ed.), Perspektiven und Methoden einer Systemischen Musikwissenschaft (pp. 163-185). Frankfurt/Main, Germany: Peter Lang.

Parncutt, R. (2004). Enrichment of music theory pedagogy by computer-based repertoire analysis and perceptual-cognitive theory. In J. W. Davidson & H. Eiholzer (Eds.), The music practitioner: Research for the music performer, teacher and listener (pp. 101-116). London, England: Ashgate.

Parncutt, R. (2006 a). Commentary on Mashinter's "Calculating sensory dissonance". Empirical Musicology Review, 1 (4).

Parncutt, R. (2006 b). Prenatal development. In G. E. McPherson (Ed.), The child as musician (pp. 1-31). Oxford, England: Oxford University Press.

Parncutt, R., & Kessler, A. (2006). Musik als virtuelle Person. In R. Flotzinger (Ed.), Musik als... Ausgewählte Betrachtungsweisen (pp. 9-52). Wien: Österreichische Akademie der Wissenschaften.

Parncutt, R., McPherson, G., Painsi, M., & Zimmer, F. (2006 c). Early acquisition of musical aural skills. Paper at 9th Int. Conf. on Music Perception and Cognition (Bologna, Italy, 21-26 August).

Read, G. (1987). Source book of proposed music notation reforms. New York: Greenwood.

Rink, J. (Ed., 1995). The practice of performance. Studies in musical interpretation. Cambridge: Cambridge University Press.

Rink, J. (Ed.) (2002). Musical performance, A guide to understanding. Cambridge: Cambridge University Press.

Saarikallio, S., & Erkkilä, J. (2007). The role of music in adolescents' mood regulation. Psychology of Music, 35, 88-109.

Schenker, H. (1906). Harmonielehre. Wien: Universal.

Scherer, K. R., Zentner, M. R., & Schacht, A. (2001-02). Emotional states generated by music: An exploratory study of music experts. Musicae Scientiae (Special Issue), 149-171.

Schneider, P., Sluming, V., Roberts, N., Scherg, M., Goebel, R., Specht, H. J., Dosch, H. G., Bleeck, S., Stippich, C., & Rupp, A. (2005). Structural and functional asymmetry of lateral Heschl’s gyrus reflects pitch perception preference. Nature Neuroscience, 8, 1241–1247.

Seither-Preisler, A., Johnson, L., Krumbholz, K., Nobbe, A., Patterson, R., Seither, S., et al. (2007). Tone sequences with conflicting fundamental pitch and timbre changes are heard differently by musicians and nonmusicians. Journal of Experimental Psychology: Human Perception and Performance, 33 (3), 743-751.

Shoji, Y., Takasuka, T., & Yasukawa, H. (2004). Personal identification using footstep detection. Proceedings of Intelligent Signal Processing and Communication Systems.

Sloboda, J. A. (1991). Music structure and emotional response: Some empirical findings. Psychology of Music, 19, 110-120.

Terhardt, E. (1976). Ein psychoakustisch begründetes Konzept der musikalischen Konsonanz. Acustica, 36, 121–137.

Terhardt, E., Stoll, G., & Seewann, M. (1982). Algorithm for extraction of pitch and pitch salience from complex tonal signals. Journal of the Acoustical Society of America, 71, 679-688.

Tse, P. U., Intriligator, J., Rivest, J., & Cavanagh, P. (2004). Attention and the subjective expansion of time. Perception & Psychophysics, 66(7), 1171-1189.

Vos, P. G. & Leman, M. (2000). Guest editorial: Tonality induction. Music Perception, 17, 401-402.

Williamon, A. (2004). Musical excellence: Strategies and techniques to enhance performance. Oxford, England: Oxford University Press.

Further information relevant for the project on walking and rhythm

Everyday technical innovations often make new experiments possible. Below is an example of a relevant newspaper report. The take-home message is this: Watch out for technical innovations that could enable you to perform a new experiment on a topic that interests you!

INNOVATIONS: These Shoes Are Made for Talking

By Matt Villano

New York Times, November 1, 2006

It was a cold and foggy afternoon the first time that Ulrike Krotscheck's Nike running shoes spoke to her. Ms. Krotscheck, a graduate student in classics at Stanford University, was jogging through Golden Gate Park in San Francisco, and after about 40 minutes of running, she wanted to see how far she had run. So she pushed a button on her iPod Nano. The device instantly sent a wireless electronic request to a battery-powered sensor in the sole of her left shoe. The sensor responded immediately, dispatching the information in a digital voice through her iPod: 5.2 miles. Ms. Krotscheck could hardly believe her earbuds. ''I had gotten used to calculating distances in my head,'' she said. ''The fact that my sneakers were doing it for me was pretty amazing.'' Shoes like these might be the future of fitness. In the cutthroat shoe manufacturing industry, two companies in particular -- Nike and Adidas -- are banking on sensors and other technology to pump up profits and change the notions of high-performance footwear forever. In the last 12 months, both manufacturers have introduced footwear that communicates wirelessly with other technology to provide information about a run. The Nike shoe, called Nike Plus, delivers data on distance and pace. The Adidas product, called adiStar Fusion, offers the same information as well as data about heart rate. This is Adidas's second venture into high-tech sneakers. Last year, the company introduced the Adidas 1, a shoe that uses a battery-powered sensor to identify terrain and analyze a runner's gait, then uses a motor-driven cable system to adjust the cushion levels. If a runner is on a dirt trail that suddenly gets muddy, the heel firms up. If the runner switches to asphalt, the heel expands. 
Michael Gartenberg, vice president and research director for Jupiter Research, a market research firm in New York, said that while these products were more likely to be popular among technophiles than runners, they should attract interest from all sorts of customers during the holiday season. ''This isn't technology for technology's sake,'' said Mr. Gartenberg, who specializes in personal technology. ''It's technology that truly does enhance the running experience, and I think that's something customers will respond to.'' Each of the latest high-tech sneakers works differently. The Nike Plus grew out of a partnership with Apple, and works in conjunction with the iPod Nano. The system was introduced in May and revolves around the Nike Plus iPod Sport Kit, which is a microchip sensor and receiver. The runner places a quarter-size sensor inside a built-in pocket in the sole of the shoe and attaches a receiver to the bottom of the iPod. Once the sensor is calibrated, the receiver enables the iPod to communicate with the sensor in the shoe. During a run, the sensor collects data on speed and distance. When the runner wants this information, the chip transmits it to the iPod, which interrupts the music to announce a report in a computerized voice. The iPod stores the data, and when a runner docks the device at home, Apple's iTunes software automatically uploads workout information to the Nikeplus.com Web site. Trevor Edwards, Nike's vice president for global brand and category management, said that this feature enabled runners to chart their workouts. ''Most people these days are running with iPods anyway, so this seemed like the perfect way to get the most out of the technology,'' Mr. Edwards said. The system may also help strengthen Nike's bond with its customers. ''With everything from capturing the data to putting it online, this system enables us to connect with our customers like never before,'' he said. The adiStar Fusion achieves a similar result. 
The shoe, unveiled in October, came about from a partnership with Polar Electro, a Finnish company known for its heart-rate monitors. Like Nike Plus, the Adidas system requires the user to place a microchip in the sole of a shoe. This chip, called the S3 Stride Sensor, made by Polar, transmits speed and distance data to a device called the Polar RS800sd Running Computer, which is worn like a wristwatch. Another device is a heart monitor called the Polar WearLink WIND, which incorporates data about pulse. Runners can clip this sensor to a Polar chest strap, or they can buy a special adiStar Fusion shirt, which works with the sensor to collect heart-rate readings from tiny electrodes sewn into the garment's material. The RS800sd computer compiles all the data and displays it in easy-to-read statistics on the wristwatch. Christian DiBenedetto, program director for intelligent products at Adidas, said that while the information was not delivered in audio, the data about heart rate can help runners in other ways. ''With this feedback during a run, you can better understand your body's performance to give yourself a great opportunity for accomplishing your personal best,'' Mr. DiBenedetto said. Neither system is cheap. The Nike Plus system runs about $300: $100 for the Air Moire or Air Zoom shoes; $29 for the Nike Plus iPod Sport Kit; and $149 for an iPod Nano. The Adidas system costs about $700: $120 for the adiStar Fusion shoe; $65 for the adiStar Fusion shirt; and $489 for the Polar sensor, heart monitor and the running computer. Another drawback of these sneakers is that they are available only in styles that have soles equipped with spots for the sensors. Generally, these shoes have average cushioning and little to no arch support. Gary Muhrcke, who owns Super Runner's Shop in Huntington, N.Y., said that this was a problem because every person's foot is different. Some people need more cushioning, others need more support. Mr. 
Muhrcke said that wearing the wrong shoes could cause major injury. ''Sneakers are not one style fits all,'' he said. ''If you're a runner with wide feet and you've been running in the same shoes for years, there's no way you're going to cram your feet into one of these shoes just to get some information off a computer.'' Enterprising runners have found ways around this problem. Cindi Raykovich, a co-owner of Sound Sports, a running store in Seattle, said her customers have used the Nike sensor by wearing the technology in a Shoe Pocket, a small walletlike pouch that can be attached to shoelaces. A Nike salesman frowned on this. During a recent visit to a NikeTown store in San Francisco, the salesman said that using the sensor with any other product could affect the readouts' accuracy. The Polar S3 Stride Sensor comes with a hook to be laced on to just about any shoe. Still, the two manufacturers see room for improvements. Mr. Edwards, the Nike vice president, said his company expected to make more shoes compatible with Nike Plus in the months ahead. Mr. DiBenedetto said that Adidas planned to make half its products compatible with Polar technology by 2010. ''We see this as the future,'' Mr. DiBenedetto said. ''Just as the industry accepted midsoles in the 1970s, so, too, will we accept this kind of technology down the road.''

How'm I Running? Sensors Know

Considering I'm a runner and foam-at-the-mouth technophile, it was no surprise that I jumped at the chance to review the latest high-tech footwear: Nike Plus, adiStar Fusion and Adidas 1. Over all, I preferred Nike Plus. The shoes themselves were surprisingly comfortable (my feet usually don't like Nikes), and because the technology revolves around the easy-to-use iPod Nano, I had no trouble figuring it out.
During my runs, I appreciated getting audio reports on my performance by pushing a button, though I could have done without the cheesy motivational mantras from the cyclist Lance Armstrong and the Olympic runner Paula Radcliffe. The Adidas adiStar Fusion system was neat but confusing. While the heart-rate readings were impressive, I found the wrist computer tough to program and difficult to decipher midstride. And it required elbow grease to get the sensor into the shoe. Both this sensor and the one in the Nike Plus took a few mile-long runs to calibrate successfully -- a critical step if you want the technology to measure distance accurately. Luckily, once the sensors are calibrated, you don't need to endure the process again. (By the way, both manufacturers say water has no effect on the performance of any of these shoes.) Technologically speaking, the Adidas 1 left the other sneakers behind. The geek in me marveled at the tiny box of gears and motors in the sneaker's midsole, and I spent an entire afternoon running from road to sand, just to feel the heels adjust. Still, from a practical perspective, the Adidas 1 is a dud. Every adjustment eats up battery life, so batteries need regular replacement. Equally perplexing is the price: for $250, it may be more prudent to buy one pair of cross-trainers and another for the road.

Univ.-Prof. Dr.phil. Richard Parncutt

Centre for Systematic Musicology

Merangasse 70
8010 Graz, Austria

Phone: +43 316 380-8161

