Abstract
A cardiac motion atlas provides a common reference space in which the cardiac motion fields of a cohort of subjects can be directly compared. From such atlases, descriptors can be learned for the subsequent diagnosis and characterization of disease. Traditionally, such atlases have been formed from imaging data acquired using a single modality. In this work we propose a framework for building a multimodal cardiac motion atlas from MR and ultrasound (US) data and incorporate a multiview classifier to exploit the complementary information provided by the two modalities. We demonstrate that our novel framework is able to detect non-ischemic dilated cardiomyopathy patients from US data alone, whilst still exploiting the MR-based information from the multimodal atlas. We evaluate two different approaches based on multiview learning to implement the classifier and achieve an improvement in classification performance from 77.5% to 83.5% compared with the use of US data without the multimodal atlas.
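The multiview classifier couples the two atlas views so that a new subject can be classified from US data alone while still benefiting from the MR view seen during atlas construction. As a rough illustration only (not the paper's implementation), the sketch below uses canonical correlation analysis, one common multiview dimensionality-reduction technique, to learn a shared latent space from paired US/MR motion descriptors and then classifies US-only test subjects in that space; all variable names, feature dimensions, and the toy data are assumptions made for illustration.

```python
# A minimal, illustrative sketch (not the paper's method): CCA is one common
# multiview dimensionality-reduction technique for coupling paired views.
# All names, dimensions and the random "descriptors" below are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, d_us, d_mr, n_latent = 50, 120, 150, 10

X_us = rng.normal(size=(n_subjects, d_us))   # US motion descriptors (atlas cohort)
X_mr = rng.normal(size=(n_subjects, d_mr))   # paired MR motion descriptors
y = np.tile([0, 1], n_subjects // 2)         # toy labels: 0 = healthy, 1 = DCM

# Learn a shared latent space from the paired US/MR views; the US view is the
# first argument so that unseen subjects can later be embedded from US alone.
cca = CCA(n_components=n_latent).fit(X_us, X_mr)

# Classify in the shared space using only the US embedding, so the MR-derived
# structure of the atlas is still exploited through the learned projection.
clf = SVC(kernel="linear").fit(cca.transform(X_us), y)

X_us_test = rng.normal(size=(5, d_us))       # new subjects, US data only
print(clf.predict(cca.transform(X_us_test)))
```

In the paper the paired descriptors come from the multimodal motion atlas rather than toy matrices, and the two multiview learning approaches it evaluates need not be plain CCA.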
Original language | English |
---|---|
Pages (from-to) | 3-11 |
Number of pages | 9 |
Journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
Volume | 10663 LNCS |
Early online date | 15 Mar 2018 |
DOIs | |
Publication status | E-pub ahead of print - 15 Mar 2018 |
Event | 8th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2017, Held in Conjunction with MICCAI 2017 - Quebec City, Canada. Duration: 10 Sept 2017 → 14 Sept 2017 |
Keywords
- Classification
- Multimodal cardiac motion atlas
- Multiview dimensionality reduction