Abstract
Segmenting continuous sensory input into coherent segments and subsegments is an important part of perception. Music is no exception. By shaping the acoustic properties of music during performance, musicians can strongly influence the perceived segmentation. Two main techniques musicians employ are the modulation of tempo and dynamics. Such variations carry important information for segmentation and lend themselves well to numerical analysis methods. In this article, based on tempo or loudness modulations alone, we propose a novel end-to-end Bayesian framework using dynamic programming to retrieve a musician's expressed segmentation. The method computes the credence of all possible segmentations of the recorded performance. The output is summarized in two forms: as a beat-by-beat profile revealing the posterior credence of plausible boundaries, and as expanded credence segment maps, a novel representation that converts readily to a segmentation lattice but retains information about the posterior uncertainty on the exact position of segments’ endpoints. To compare any two segmentation profiles, we introduce a method based on unbalanced optimal transport. Experimental results on the MazurkaBL dataset show that despite the drastic dimension reduction from the input data, the segmentation recovery is sufficient for deriving musical insights from comparative examination of recorded performances. This Bayesian segmentation method thus offers an alternative to binary boundary detection and finds multiple hypotheses fitting information from recorded music performances.
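The abstract's central computation, summing posterior credence over every possible segmentation by dynamic programming, can be illustrated in miniature. The sketch below is not the authors' published model: it assumes a hypothetical Gaussian segment-mean likelihood with a conjugate prior and a geometric prior on segment lengths, applied to a beat-level tempo or loudness curve; all function names and parameter values are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def segment_logml(cs, cs2, i, j, sigma2, tau2):
    """Marginal log-likelihood of x[i:j] as one segment under the assumed
    model x_t = mu + eps_t, with mu ~ N(0, tau2) and eps_t ~ N(0, sigma2)."""
    m = j - i
    s = cs[j] - cs[i]        # segment sum
    s2 = cs2[j] - cs2[i]     # segment sum of squares
    return (-0.5 * m * np.log(2 * np.pi)
            - 0.5 * (m - 1) * np.log(sigma2)
            - 0.5 * np.log(sigma2 + m * tau2)
            - 0.5 * (s2 / sigma2 - tau2 * s**2 / (sigma2 * (sigma2 + m * tau2))))

def boundary_posterior(x, p=0.1, sigma2=0.05, tau2=1.0):
    """Posterior credence of a segment boundary after each beat, summed
    exactly over all 2^(n-1) segmentations by forward-backward DP."""
    n = len(x)
    cs = np.concatenate(([0.0], np.cumsum(x)))
    cs2 = np.concatenate(([0.0], np.cumsum(x**2)))
    log_g = np.log(p) + (np.arange(n + 1) - 1) * np.log1p(-p)  # geometric lengths
    B = np.full(n + 1, -np.inf)   # B[j]: log P(x[:j], boundary after beat j)
    B[0] = 0.0
    for j in range(1, n + 1):
        B[j] = logsumexp([B[i] + segment_logml(cs, cs2, i, j, sigma2, tau2)
                          + log_g[j - i] for i in range(j)])
    F = np.full(n + 1, -np.inf)   # F[i]: log P(x[i:] | segment starts at beat i)
    F[n] = 0.0
    for i in range(n - 1, -1, -1):
        F[i] = logsumexp([segment_logml(cs, cs2, i, j, sigma2, tau2)
                          + log_g[j - i] + F[j] for j in range(i + 1, n + 1)])
    return np.exp(B[1:n] + F[1:n] - B[n])  # credence of each internal boundary

# Toy check: a tempo curve with one clear level change after beat 50.
rng = np.random.default_rng(0)
tempo = np.concatenate((rng.normal(1.0, 0.2, 50), rng.normal(-1.0, 0.2, 50)))
profile = boundary_posterior(tempo)
print(int(np.argmax(profile)) + 1)  # peaks near beat 50
```

The forward pass accumulates evidence that a boundary falls after beat j, the backward pass does the same from the right, and their product normalized by the total evidence B[n] yields the beat-by-beat boundary-credence profile described above. For comparing two such profiles, the paper introduces its own unbalanced optimal transport method; a generic stand-in using the POT library's unbalanced Sinkhorn divergence (regularization values here are arbitrary, not the paper's) could look like:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def profile_distance(p, q, reg=1.0, reg_m=10.0):
    """Unbalanced-Sinkhorn divergence between two boundary-credence profiles.
    Because total masses need not match, extra or missing boundary credence
    is penalized (via reg_m) instead of being forcibly transported."""
    pos = np.arange(len(p), dtype=float)[:, None]
    M = ot.dist(pos, pos)  # squared beat-distance cost matrix
    return ot.unbalanced.sinkhorn_unbalanced2(p, q, M, reg=reg, reg_m=reg_m)
```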
Original language | English |
---|---|
Number of pages | 16 |
Journal | Music & Science |
Volume | 7 |
Publication status | Published - 24 Mar 2024 |
Keywords
- expressive performance
- segmentation
- Bayesian segmentation
- dynamic programming
- optimal transport
- musical interpretation
- comparative analysis
- computational algorithm
Projects
- COSMOS: Computational Shaping and Modeling of Musical Structures
  Chew, E. (Primary Investigator)
  1/07/2022 → 30/11/2025
  Project: Research
Research outputs
- A Computational Method for Empirically Validating Synchronisation Between Musical Phrase Arcs and Autonomic Variables
  Cotic, N., Solinski, M., Pope, V., Lambiase, P. & Chew, E., 8 Sept 2024, (Accepted/In press) IEEE Xplore.
  Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review
- Characterizing and Interpreting Music Expressivity through Rhythm and Loudness Simplices
  Lascabettes, P., Chew, E. & Bloch, I., Oct 2023, Proceedings of the International Computer Music Conference. Shenzhen, China, Vol. 47. 8 p.
  Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review
  Open Access
- COSMOS: Computational Shaping and Modeling of Musical Structures
  Chew, E., 27 May 2022, In: Frontiers in Psychology. 13, 527539.
  Research output: Contribution to journal › Article › peer-review
  Open Access · 2 Citations (Scopus)
Activities
- Journées d'Informatique Musicale (JIM24): Keynote: Music, Mathematics, and the Heart: A mellifluous mélange
  Chew, E. (Speaker)
  6 May 2024
  Activity: Talk or presentation › Invited talk