Abstract
Musical prosody is characterized by the acoustic variations that make music expressive. However, few systematic and scalable studies exist on the function it serves or on effective tools to carry out such studies. To address this gap, we introduce a novel approach to capturing information about prosodic functions through a citizen science paradigm. In typical bottom-up approaches to studying musical prosody, acoustic properties in performed music and basic musical structures such as accents and phrases are mapped to prosodic functions, namely segmentation and prominence. In contrast, our top-down, human-centered method puts listener annotations of musical prosodic functions first, to analyze the connection between these functions, the underlying musical structures, and acoustic properties. The method is applied primarily to the exploration of segmentation and prominence in performed solo piano music. These prosodic functions are marked by means of four annotation types (boundaries, regions, note groups, and comments) in the CosmoNote web-based citizen science platform, which presents the music signal or MIDI data and related acoustic features in information layers that can be toggled on and off. Various annotation strategies are discussed and appraised: intuitive vs. analytical; real-time vs. retrospective; and audio-based vs. visual. The end-to-end data collection process is described, from the provision of prosodic examples, to the structuring and formatting of the annotation data for analysis, to techniques for preventing precision errors. The aim is to obtain reliable and coherent annotations that can be applied to theoretical and data-driven models of musical prosody. The outcomes include a growing library of prosodic examples, with the goal of establishing an annotation convention for studying musical prosody in performed music.
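The abstract names four annotation types (boundaries, regions, note groups, and comments) and mentions structuring the annotation data for analysis and preventing precision errors. The paper and the CosmoNote platform define their own formats; the following is only a minimal, hypothetical sketch of how such annotation records might be represented, where the class, field names, and millisecond rounding are all assumptions for illustration, not CosmoNote's actual schema.

```python
# Purely illustrative sketch -- NOT CosmoNote's actual schema. All class,
# field, and value names below are assumptions for illustration only.
import json
from dataclasses import dataclass, field, asdict
from typing import Literal, Optional

AnnotationKind = Literal["boundary", "region", "note_group", "comment"]

@dataclass
class Annotation:
    kind: AnnotationKind
    start: float                 # onset time in seconds
    end: Optional[float] = None  # regions and note groups span an interval
    label: str = ""              # free-text label or comment body
    notes: list[int] = field(default_factory=list)  # note indices (note groups)

    def __post_init__(self) -> None:
        # Round to milliseconds so exported values compare reliably across
        # annotators and tools: one simple guard against precision errors.
        self.start = round(self.start, 3)
        if self.end is not None:
            self.end = round(self.end, 3)

annotations = [
    Annotation("boundary", start=12.5041, label="phrase end"),
    Annotation("region", start=0.0, end=8.25, label="opening theme"),
    Annotation("note_group", start=3.1, end=3.9, notes=[14, 15, 16]),
    Annotation("comment", start=20.0, label="strong agogic accent"),
]

# Structure the annotations for analysis, e.g., as JSON.
print(json.dumps([asdict(a) for a in annotations], indent=2))
```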
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 886570 |
| Journal | Frontiers in Psychology |
| Volume | 13 |
| DOIs | |
| Publication status | Published - 21 Jul 2022 |
Keywords
- prosody
- segmentation
- prominence
- annotation
- representation
- music performance
Projects
-
COSMOS: Computational Shaping and Modeling of Musical Structures
Chew, E. (Principal Investigator)
1/07/2022 → 30/11/2025
Project: Research
Research output
-
A framework for modeling performers' beat-to-beat heart intervals using music features and Interpretation Maps
Soliński, M., Reed, C. N. & Chew, E., 4 Sept 2024, In: Frontiers in Psychology. 15, 1403599, 10 p.
Research output: Contribution to journal › Article › peer-review
Open Access
-
Seeing music's effect on the heart
Chew, E., Fyfe, L., Picasso, C. & Lambiase, P., 1 Nov 2024, In: European Heart Journal. 45, 41, p. 4359–4363, 5 p., ehae436.
Research output: Contribution to journal › Article › peer-review
-
Adaptive Scattering Transforms for Playing Technique Recognition
Wang, C., Benetos, E., Lostanlen, V. & Chew, E., 7 Mar 2022, In: IEEE/ACM Transactions on Audio, Speech, and Language Processing. 30, p. 1407-1421, 15 p.
Research output: Contribution to journal › Article › peer-review
Open Access
Activities
-
Open Innovation in Science 2024: Debate: Zeroing in on the scientist: Navigating Dynamic Challenges in the Modern Research Landscape
Chew, E. (Speaker)
22 May 2024
Activity: Talk or presentation › Invited talk
-
Journées d'Informatique Musicale (JIM24): Keynote: Music, Mathematics, and the Heart: A mellifluous mélange
Chew, E. (Speaker)
6 May 2024
Activity: Talk or presentation › Invited talk
-
The International Conference on AI and Musical Creativity: Keynote: Performer-centered AI and Creativity
Chew, E. (Speaker)
1 Sept 2023
Activity: Talk or presentation › Invited talk
Prizes
-
European Research Council (ERC) Advanced Grant
Chew, E. (Recipient), Jun 2019
Prize: Fellowship awarded competitively