Deep learning analysis of multi-modal neonatal MRI

Student thesis: Doctoral Thesis (Doctor of Philosophy)

Abstract

Analysis of magnetic resonance imaging (MRI) of the neonatal brain comes with unique challenges. Rapid development results in changes in both the shape and the appearance of the neonatal brain scanned at different post-menstrual weeks. These changes affect the outputs of image analysis tools, such as image registration and segmentation, making interpretation of the results difficult.

The aim of this PhD project is to develop deep learning image segmentation and registration tools that address the challenges of analysing developing neonatal brain MRI. While accurate segmentation of neonatal brain MRI has been achieved with existing classical segmentation techniques, these are sensitive to MRI acquisition protocols, making volumetric comparisons between subjects from different studies unreliable. I therefore propose harmonised deep learning-based segmentation for neonatal MRI. At the same time, traditional medical image registration methods can be misguided by the rapid MRI contrast changes caused by ongoing brain tissue maturation in the first weeks of life. To alleviate this problem, I propose a multi-channel attention-based deep learning registration approach that selects the most salient features from multiple image modalities to improve the alignment of individual MR images to a common atlas space.
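
The multi-channel attention idea can be illustrated with a short sketch: a small network predicts voxel-wise weights over the input modalities, and these weights fuse per-modality dissimilarity maps into a single loss that drives registration towards the atlas. The PyTorch module below, its layer sizes, and the squared-error dissimilarity are illustrative assumptions for the example, not the architecture developed in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    """Predicts voxel-wise weights over the input modalities and uses them to
    fuse per-modality dissimilarity maps into a single registration loss term."""

    def __init__(self, n_modalities=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_modalities, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, n_modalities, kernel_size=3, padding=1),
        )

    def forward(self, warped, atlas):
        # warped, atlas: (B, n_modalities, D, H, W), e.g. stacked T2w and FA channels
        attn = F.softmax(self.net(warped), dim=1)   # weights sum to 1 over modalities at each voxel
        dissimilarity = (warped - atlas) ** 2       # per-channel squared error (illustrative metric)
        fused = (attn * dissimilarity).sum(dim=1)   # attention-weighted combination
        return fused.mean(), attn                   # scalar loss and the attention maps


# Toy example with random volumes standing in for a warped multi-channel image and the atlas
warped = torch.rand(1, 2, 32, 32, 32)
atlas = torch.rand(1, 2, 32, 32, 32)
loss, attention = AttentionFusion()(warped, atlas)
print(loss.item(), attention.shape)  # attention: (1, 2, 32, 32, 32)
```

Because the weights are spatially varying, the loss can favour the T2w channel in cortical regions and the FA channel along white matter tracts, which is the behaviour the thesis exploits.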

As a prerequisite for the contributions, the first chapter introduces the neonatal brain and describes the main MRI modalities, as well as the two neonatal datasets used in this thesis. The second and third chapters lay the groundwork for the methods used throughout the thesis, with a focus on classical and deep learning image registration and segmentation algorithms. A survey of state-of-the-art deep learning-based medical image registration and segmentation techniques follows, presenting the baseline models used throughout this thesis as well as more advanced techniques, such as unsupervised domain adaptation and visual attention.

The three novel chapters of the thesis describe my contributions. First, I investigated deep learning domain adaptation algorithms to reduce the domain shift between a source and a target dataset, making it feasible to predict on unseen data distributions. My proposed image-space domain adaptation model, combined with data augmentation, provided the best solution for harmonising tissue segmentation maps of two neonatal datasets. I showed that there were no significant differences in tissue volumes and cortical thickness measures derived from the harmonised segmentations on a subset of the datasets matched for gestational age at birth and postmenstrual age at scan. Second, I developed a novel attention-based deep learning multi-channel registration model that learns the spatially varying attention maps needed to fuse different modalities, thus taking advantage of their complementary nature. I applied the technique to align multi-channel datasets composed of structural T2-weighted (T2w) MRI and fractional anisotropy (FA) maps derived from diffusion MRI to the atlas space. The quantitative evaluation confirmed that, while cortical structures were better aligned using T2w data and white matter tracts were better aligned using FA maps, the attention-based multi-channel registration aligned both types of structures accurately. Finally, I extended the registration model from the previous chapter to align multi-channel data composed of structural T2w MRI and diffusion tensor maps into the atlas space, which further improved the alignment of white matter tracts.
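
For orientation, the image-space domain adaptation described in the first contribution belongs to a family of methods in which an appearance translator is trained adversarially so that volumes acquired with one protocol resemble those acquired with another, allowing a single segmentation model to serve both datasets. The sketch below is a rough, hypothetical illustration of that general idea; the networks, layer sizes, and training step are assumptions for the example rather than the model developed in the thesis.

```python
import torch
import torch.nn as nn


class Translator(nn.Module):
    """Residual CNN that adjusts image appearance (intensity/contrast) while
    leaving anatomy largely unchanged."""

    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual update: change contrast, keep structure


class Discriminator(nn.Module):
    """Patch-level critic scoring whether a volume looks like the target protocol."""

    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(ch, 2 * ch, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(2 * ch, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# One adversarial step on random stand-in volumes from the two acquisition protocols
bce = nn.BCEWithLogitsLoss()
G, D = Translator(), Discriminator()
source = torch.rand(1, 1, 32, 32, 32)   # source-protocol volume
target = torch.rand(1, 1, 32, 32, 32)   # target-protocol volume

translated = G(source)
real_logits = D(target)
fake_logits = D(translated.detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + bce(fake_logits, torch.zeros_like(fake_logits))
g_loss = bce(D(translated), torch.ones_like(real_logits))  # generator tries to fool the critic
print(d_loss.item(), g_loss.item())
```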

In this PhD thesis, I proposed solutions to some of the challenges in the analysis of neonatal MRI, where the developing brain changes both its shape and its MRI tissue contrast as it grows. The proposed techniques support accurate image segmentation independent of the acquisition protocol, and multi-channel registration to atlas space that can take advantage of the different information content of the various MRI modalities. These techniques will help to improve the reliability and interpretability of downstream neuroimaging analyses.
Date of Award: 1 Sept 2023
Original language: English
Awarding Institution
  • King's College London
Supervisors: Maria Deprez & Marc Modat
