ML and MAP PET reconstruction with MR-voxel sizes for simultaneous PET-MR

Research output: Chapter in Book/Report/Conference proceeding › Conference paper


Abstract

The introduction of clinical simultaneous PET-MR scanners has brought new opportunities to use anatomical MR images to assist PET image reconstruction. In this context, MR images are usually downsampled to the PET resolution before being used as anatomical priors in MR-guided PET reconstruction. However, reconstructing PET images at the MR-voxel size could make better use of the high-resolution anatomical information and improve the partial volume correction obtained with these methods. When the PET reconstruction needs to be done in a higher-resolution matrix, a number of artifacts arise in the reconstructed image, depending on the projector and system matrix used. In this work, we propose a method that modifies the system matrix to overcome these difficulties, and we show reconstructed images of a NEMA phantom and patient data for standard and high-resolution image sizes. The higher-resolution reconstructed images show a better delineation of the edges and a modest improvement of the contrast in the smallest spheres of the NEMA phantom. In addition, we evaluated the method for MR-guided MAP reconstruction, where patient data were reconstructed using a Bowsher prior computed from the T1-weighted image in its original resolution. The reconstructed images with MR-voxel sizes showed a better definition of the structures of the brain and quantitatively better contrast in the striatum, showing that MR-guided MAP reconstruction with MR-voxel size can enhance the partial volume correction.
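The abstract does not disclose the authors' implementation, but the Bowsher prior it mentions has a well-known core step: for each voxel, only the neighbours whose anatomical (MR) intensities are most similar are allowed to contribute to the smoothing penalty. A minimal 2D sketch of that neighbour-selection step, assuming a NumPy array for the MR image and an 8-neighbourhood (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def bowsher_weights(mr, num_selected=3):
    """For each pixel of a 2D MR image, keep only the `num_selected`
    neighbours (out of the 8-neighbourhood) whose MR intensities are
    most similar, as in the Bowsher prior. Returns a (H, W, 8) binary
    weight array usable in a quadratic MAP penalty."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    padded = np.pad(mr, 1, mode='edge')  # replicate edges at the border
    H, W = mr.shape
    # absolute MR intensity difference to each of the 8 neighbours
    diffs = np.stack([np.abs(padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W] - mr)
                      for dy, dx in offsets], axis=-1)
    # rank neighbours by similarity and keep only the most similar ones
    order = np.argsort(diffs, axis=-1)
    weights = np.zeros_like(diffs)
    np.put_along_axis(weights, order[..., :num_selected], 1.0, axis=-1)
    return weights
```

These weights would then multiply the pairwise differences of the PET image inside the MAP penalty term, so smoothing acts only across anatomically similar neighbours; computing them on the T1-weighted image at its original resolution is what requires the PET image to be reconstructed at the MR-voxel size.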
Original language: English
Title of host publication: 2017 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)
Publication status: Accepted/In press - 2017

