A master's thesis from Aalborg University


PET Image Reconstruction using Convolutional Neural Network and Generative Adversarial Network in Sinogram Domain

Author

Term

4th term

Publication year

2019

Submitted on

Pages

10

Abstract


Positron Emission Tomography (PET) images can contain noise and artifacts when the radiotracer dose is lowered or when parts of the raw scan data are missing (for example, missing pixels or entire projection angles). In scanners, these raw measurements are stored as a sinogram, a representation of many line-of-sight projections that is later turned back into an image. We compared two deep learning approaches—Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs)—for reconstructing missing data in PET sinograms. Our end-to-end pipeline converted PET images to the sinogram domain using the Radon transform, had a model fill in the missing values, and then reconstructed images with filtered back projection. The CNN was an encoder–decoder with skip connections, and we used a progressive training strategy that loaded previously trained weights to handle increasingly corrupted sinograms. The GAN used a similar generator plus a discriminator that learned to tell real from generated sinograms. GANs performed slightly better than CNNs when filling in missing pixels across five corruption levels (average PSNR 42.34 vs 41.44; SSIM 0.983 vs 0.977; PSNR and SSIM are standard measures of image quality). When entire projections were missing, the gap was larger (PSNR 46.84 vs 40.13; SSIM 0.989 vs 0.866). With 90% of the sinogram data removed, GANs produced sharper and more detailed reconstructions than CNNs. Differences in network architectures and training objectives likely explain why GANs performed better. Despite limitations, the results are promising and motivate further experiments.
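The pipeline the abstract describes (Radon transform to reach the sinogram domain, corruption of the sinogram, then filtered back projection) can be sketched without any deep learning component. The sketch below is illustrative only: it assumes NumPy/SciPy, uses a naive rotate-and-sum Radon transform and a simple ramp-filtered back projection, and replaces the thesis's CNN/GAN inpainting step with nothing at all (the corrupted sinogram is reconstructed directly). All function names are ours, not the thesis code's.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    """Forward project: rotate the image and sum along rows at each angle."""
    return np.stack(
        [rotate(image, a, reshape=False, order=1).sum(axis=0) for a in angles_deg],
        axis=1,
    )  # shape (detector_bins, n_angles): the sinogram

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct with a ramp (Ram-Lak) filter followed by backprojection."""
    n = sinogram.shape[0]
    ramp = np.abs(np.fft.fftfreq(n))  # ramp filter in the frequency domain
    filtered = np.real(
        np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0)
    )
    recon = np.zeros((n, n))
    for i, a in enumerate(angles_deg):
        # smear the filtered projection back across the image, constant along rays,
        # then rotate it into the angle it was measured at
        smear = np.tile(filtered[:, i], (n, 1))
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

def corrupt(sinogram, fraction, rng, whole_projections=False):
    """Zero out a fraction of sinogram pixels, or entire projection angles."""
    s = sinogram.copy()
    if whole_projections:
        k = int(fraction * s.shape[1])
        s[:, rng.choice(s.shape[1], k, replace=False)] = 0.0
    else:
        s[rng.random(s.shape) < fraction] = 0.0
    return s

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB, one of the two metrics the abstract cites."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Demo: a disk phantom, 5% missing sinogram pixels, reconstruction with no inpainting.
rng = np.random.default_rng(0)
n = 64
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < 12 ** 2).astype(float)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = radon(phantom, angles)
recon = filtered_back_projection(corrupt(sino, 0.05, rng), angles)
```

In the thesis pipeline, the output of `corrupt` would be passed through the trained CNN or GAN generator before filtered back projection; without that inpainting step, the artifacts from the missing pixels remain in `recon`.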

[This abstract was generated with the help of AI]