70. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V.
Deep Learning-Based Attenuation Correction for FDG PET: Reducing Radiation Exposure and Mitigating Motion Artifacts
2Universitätsklinikum Regensburg, Regensburg, Germany
Introduction: Radiation exposure poses risks to human health, making it essential to minimize patient dose in medical imaging. Positron Emission Tomography (PET) is a powerful tool for cancer diagnosis and staging, but it requires Computed Tomography (CT) for attenuation correction (AC). However, CT introduces additional radiation exposure and may cause errors due to patient motion [1]. Deep learning (DL) offers a promising alternative by generating synthetic AC PET directly from non-attenuation-corrected (NAC) PET images, potentially eliminating the need for CT in some cases. This approach could reduce radiation dose and avoid motion artifacts while preserving diagnostic accuracy, making PET imaging safer and more accessible.
Methods: This study utilized a retrospective dataset of 266 consecutive whole-body FDG PET/CT scans, comprising 104,920 slices (256×256), acquired on a Siemens Horizon 4R scanner. The dataset included NAC PET, AC PET, and CT scans.
Attenuation correction was performed using low-dose CT-based methods. For model development, the dataset was divided into 239 scans for training and validation and 27 for evaluation.
To generate synthetic AC PET from NAC PET, we employed a U-Net architecture [2] with ResNet blocks [3]. Each block consists of a convolution, batch normalization, and ReLU activation. Both the contracting and expanding paths have 4 layers. Training was performed using the mean absolute error loss and the Adam optimizer with an initial learning rate of 0.001. Cosine annealing was applied for learning rate scheduling. We trained both 2D and 3D model variants: the former uses batches of 240 axial slices, the latter patches of size 128×128×128.
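A minimal PyTorch sketch of this setup, assuming a residual block of the described form (convolution, batch normalization, ReLU, plus a skip connection) and showing only two of the four resolution levels for brevity; `TinyUNet`, `ResBlock`, and the dummy tensors below are illustrative, not the study's exact implementation:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Conv -> BatchNorm -> ReLU twice, with a residual skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)

class TinyUNet(nn.Module):
    """Two-level encoder/decoder sketch (the abstract describes 4 levels per path)."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, base, 3, padding=1), ResBlock(base))
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), ResBlock(base * 2))
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), ResBlock(base))
        self.head = nn.Conv2d(base, 1, 1)  # map back to a single PET channel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

model = TinyUNet()
loss_fn = nn.L1Loss()  # mean absolute error, as in the abstract
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)

nac = torch.rand(2, 1, 256, 256)  # dummy NAC PET slices
ac = torch.rand(2, 1, 256, 256)   # dummy reference AC PET slices
loss = loss_fn(model(nac), ac)
loss.backward()
opt.step()
sched.step()
```

The 3D variant would follow the same pattern with `Conv3d`/`BatchNorm3d` layers operating on 128×128×128 patches.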
Model performance was evaluated using:
- Structural Similarity Index (SSIM)
- Mean Absolute Error (MAE)
- Peak Signal-to-Noise Ratio (PSNR).
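MAE and PSNR can be sketched in plain Python as below (SSIM is more involved and is typically computed with a library such as scikit-image); the voxel values in the example are illustrative, not from the study:

```python
import math

def mae(pred, ref):
    """Mean absolute error between two equally sized intensity lists."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

def psnr(pred, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

# Toy example on normalized voxel intensities
synthetic = [0.10, 0.52, 0.33, 0.80]
reference = [0.12, 0.50, 0.30, 0.84]
error = mae(synthetic, reference)
quality_db = psnr(synthetic, reference)
```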
Results: Quantitative results are as follows:
- 2D model:
  - Average SSIM: 0.983 ± 0.02
  - Average MAE: 0.023 ± 0.02
  - Average PSNR: 39.789 ± 5.511
- 3D model:
  - Average SSIM: 0.972 ± 0.026
  - Average MAE: 0.145 ± 0.04
  - Average PSNR: 42.246 ± 5.76
To further assess voxel-wise activity concentration, we conducted a joint histogram analysis comparing synthetic AC PET and reference CT-based AC PET images. The mean slope, correlation coefficient, and R² were 1.08, 0.964, and 0.938 for the 2D model, and 0.951, 0.965, and 0.935 for the 3D model, respectively.
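The slope, correlation coefficient, and R² of such a joint-histogram comparison come from a least-squares fit of paired voxel values; a plain-Python sketch, where `linear_fit_stats` and the voxel pairs are illustrative, not from the study:

```python
import math

def linear_fit_stats(x, y):
    """Least-squares slope, Pearson correlation, and R^2 for paired values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    r = sxy / math.sqrt(sxx * syy)
    return slope, r, r * r

# Hypothetical paired voxel activities: reference CT-based AC vs. synthetic AC
ref = [0.1, 0.4, 0.7, 1.0, 1.3]
syn = [0.12, 0.41, 0.69, 1.05, 1.28]
slope, r, r2 = linear_fit_stats(ref, syn)
```

A slope near 1 with high correlation indicates that the synthetic images neither systematically over- nor underestimate activity concentration.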
Our method achieves results comparable to or surpassing those of Shiri et al. [4] and Li et al. [5], despite employing a simpler architecture with fewer parameters and training on a smaller dataset. Furthermore, we observe improved performance when using L1 loss over L2 loss, especially in the lung region.
In addition, an experienced nuclear medicine specialist performed a qualitative assessment of the synthetic AC PET images generated by both models. The evaluation confirmed high diagnostic confidence. Notably, in some cases, synthetic AC PET images exhibited fewer artifacts than those obtained using CT-based attenuation correction.
Conclusion: Our results, both quantitatively and qualitatively, demonstrate a high degree of concordance between CT-based and synthetic AC PET images. DL has the potential to replace CT-based methods, reducing radiation exposure for patients. Additionally, it may enhance image quality by mitigating motion-related artifacts.
The authors declare that they have no competing interests.
The authors declare that an ethics committee vote is not required.
References
[1] Mehranian A, Arabi H, Zaidi H. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities. Medical Physics. 2016;43(3):1130-1155. DOI: 10.1118/1.4941014
[2] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation [Preprint]. arXiv. 2015 May. DOI: 10.48550/arXiv.1505.04597
[3] He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition [Preprint]. arXiv. 2015 Dec 10. DOI: 10.48550/arXiv.1512.03385
[4] Shiri I, Arabi H, Geramifar P, Hajianfar G, Ghafarian P, Rahmim A, Ay MR, Zaidi H. Deep-JASC: Joint attenuation and scatter correction in whole-body 18F-FDG PET using a deep residual network. European Journal of Nuclear Medicine and Molecular Imaging. 2020;47(11):2533-2548. DOI: 10.1007/s00259-020-04852-5
[5] Li W, Huang Z, Chen Z, Jiang Y, Zhou C, Zhang X, Fan W, Zhao Y, Zhang L, Wan L, Yang Y, Zheng H, Liang D, Hu Z. Learning CT-free attenuation-corrected total-body PET images through deep learning. European Radiology. 2024;34(9):5578-5587. DOI: 10.1007/s00330-024-10647-1



