Jiachang Xu, Haonan Wu, Ding Hong, Shuzhi Su. Super-Resolution Fusion of Multimodal Medical Images Based on NSST and FL-DTNP[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089.2024-00331

Super-Resolution Fusion of Multimodal Medical Images Based on NSST and FL-DTNP

To address feature information loss, inaccurate threshold settings, and the loss of detail caused by low resolution in the fusion process of traditional Dynamic Threshold Neural P (DTNP) systems, a multimodal medical image super-resolution fusion method based on a feature-link model optimization, called the Feature Link DTNP (FL-DTNP) system, is proposed. The method uses adaptive link strength for image feature extraction. First, bilinear interpolation and the Non-Subsampled Shearlet Transform (NSST) are applied to enhance and decompose the source images. Then, the FL-DTNP system fuses the detail and edge information of the high-frequency sub-bands. For the low-frequency sub-bands, a visually salient weighted local energy rule, based on an eight-neighborhood Laplacian weighting, is applied to merge the coefficients. Finally, the inverse NSST reconstructs the fused image from the high- and low-frequency sub-bands. Experiments on the Harvard dataset compare the proposed method with eight classic fusion methods based on multi-scale transforms, deep learning, and sparse representation across seven evaluation metrics. The results demonstrate that the proposed method performs strongly and generates high-quality fused images.
