Synthetic Computed Tomography Generation via Cycle Bi-Directional Transformer
Abstract
Magnetic resonance imaging (MRI)-guided radiotherapy allows treatment plans to be adapted in real time to changes in tumors and organs at risk. It relies primarily on pseudo-computed tomography (CT) images generated from MRI. Current pseudo-CT synthesis techniques are mainly based on generative adversarial networks; however, because these networks update their parameters with pixel-level losses during training, they are prone to mode collapse and produce unstable pseudo-CT images. To generate accurate pseudo-CT images from magnetic resonance (MR) images, a medical image synthesis method, the Cycle Bi-directional Transformer, is proposed, leveraging the context sensitivity of vision Transformers and the inductive bias of convolutional operators. During the encoding and prediction phase, the Cycle Bi-directional Transformer represents images with a codebook obtained from a U-Net encoder and shortens the generated code sequence through non-autoregressive encoding and vector quantization, thereby producing locally realistic and globally consistent images. Furthermore, normalized mutual information is employed as a loss function and a cycle consistency loss is introduced to address data mismatch. Experiments on a collected dataset of cranial MR images validate the effectiveness of the proposed method: the Cycle Bi-directional Transformer achieves a mean absolute error (MAE) of 86.3, a peak signal-to-noise ratio (PSNR) of 25.96 dB, and a structural similarity index (SSIM) of 0.897, outperforming the compared methods.
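The abstract names two training objectives without stating them. For reference, the standard formulations they most plausibly correspond to are sketched below; these are assumptions, since the paper's exact definitions are not reproduced here. With G: MR to CT and F: CT to MR denoting the two synthesis mappings, the cycle consistency loss and one common definition of normalized mutual information between a synthetic image \(\hat{y}\) and its reference \(y\) are typically written as
\begin{align}
\mathcal{L}_{\mathrm{cyc}}(G, F) &= \mathbb{E}_{x \sim p(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p(y)}\big[\lVert G(F(y)) - y \rVert_1\big], \\
\mathrm{NMI}(\hat{y}, y) &= \frac{H(\hat{y}) + H(y)}{H(\hat{y}, y)},
\end{align}
where \(H(\cdot)\) and \(H(\cdot,\cdot)\) denote the marginal and joint entropies of the image intensity distributions. Maximizing NMI (or minimizing its negative) encourages statistical alignment between synthetic and reference images even when they are not perfectly registered, which is consistent with the data mismatch issue the abstract mentions.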