

Super-Resolution Fusion of Multimodal Medical Images Based on NSST and FL-DTNP

Abstract: To address the feature-information loss, inaccurate threshold setting, and loss of detail caused by low resolution in the fusion process of the traditional dynamic threshold neural P (DTNP) system, a multimodal medical image super-resolution fusion method based on a feature-linking DTNP (FL-DTNP) system, i.e., a DTNP system optimized with a feature-linking model, is proposed. The method extracts image features with an adaptive linking strength. First, bicubic interpolation and the non-subsampled shearlet transform (NSST) are used to enhance and decompose the source images. Then, the FL-DTNP system fuses the detail and edge information of the high-frequency sub-bands, while the low-frequency sub-bands are merged with a rule that combines visual-saliency-weighted local energy and a weighted sum of the eight-neighborhood Laplacian. Finally, the fused high- and low-frequency sub-bands are reconstructed by the inverse NSST. On the Harvard dataset, the proposed method is compared with eight classic fusion methods based on multi-scale transform, deep learning, and sparse representation under seven evaluation metrics. The results show that the proposed method delivers excellent performance and produces high-quality fused images.
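The abstract describes a four-stage pipeline: bicubic pre-upsampling, NSST decomposition, band-wise fusion (FL-DTNP for the high-frequency sub-bands, a saliency-weighted local-energy/Laplacian rule for the low-frequency sub-band), and inverse-NSST reconstruction. The sketch below only illustrates that flow and is not the authors' implementation: it assumes OpenCV and SciPy as dependencies, replaces the NSST with a simple Gaussian low-pass/detail split, stands in for the FL-DTNP decision with a local-energy choose-max rule, and guesses the 3x3 weights of the low-frequency rule, since the abstract names the rules but gives no equations.

```python
# Minimal, runnable sketch of the fusion pipeline outlined in the abstract.
# Assumptions (not from the paper): NSST is replaced by a Gaussian low-pass /
# detail split, and the FL-DTNP high-frequency rule is replaced by a
# local-energy "choose-max" decision; both are stand-ins only.
import numpy as np
import cv2
from scipy.ndimage import convolve, gaussian_filter

# 3x3 averaging weights; the exact weights of the paper's low-frequency rule
# are not given in the abstract, so these are assumptions.
W3 = np.array([[1., 2., 1.],
               [2., 4., 2.],
               [1., 2., 1.]]) / 16.0

def upscale_bicubic(img, scale=2.0):
    """Bicubic pre-upsampling, as in the super-resolution step."""
    return cv2.resize(img.astype(np.float32), None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)

def split_bands(img, sigma=2.0):
    """Stand-in for NSST decomposition: one low-pass band plus one detail band."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def fuse_high(h1, h2, win=3):
    """Placeholder for the FL-DTNP rule: keep the coefficient whose local
    energy (a simple activity measure) is larger."""
    k = np.ones((win, win)) / (win * win)
    e1 = convolve(h1 * h1, k, mode='reflect')
    e2 = convolve(h2 * h2, k, mode='reflect')
    return np.where(e1 >= e2, h1, h2)

def fuse_low(l1, l2):
    """Low-frequency rule in the spirit of the abstract: weighted local energy
    combined with an eight-neighborhood weighted Laplacian as the activity."""
    lap8 = np.array([[-1., -1., -1.],
                     [-1.,  8., -1.],
                     [-1., -1., -1.]])
    def activity(l):
        wle = convolve(l * l, W3, mode='reflect')             # weighted local energy
        wseml = convolve(np.abs(convolve(l, lap8, mode='reflect')),
                         W3, mode='reflect')                  # weighted 8-neighbour Laplacian
        return wle * wseml
    a1, a2 = activity(l1), activity(l2)
    return np.where(a1 >= a2, l1, l2)

def fuse(img_a, img_b, scale=2.0):
    """End-to-end sketch: upscale, decompose, fuse per band, reconstruct."""
    a, b = upscale_bicubic(img_a, scale), upscale_bicubic(img_b, scale)
    la, ha = split_bands(a)
    lb, hb = split_bands(b)
    # Reconstruction is the exact inverse of split_bands (low + high).
    return fuse_low(la, lb) + fuse_high(ha, hb)

if __name__ == "__main__":
    # Toy inputs standing in for a registered multimodal pair (e.g., CT/MR).
    rng = np.random.default_rng(0)
    ct = rng.random((64, 64)).astype(np.float32)
    mr = rng.random((64, 64)).astype(np.float32)
    fused = fuse(ct, mr)
    print(fused.shape)  # (128, 128) after 2x bicubic upsampling
```

A faithful implementation would replace split_bands with a true multi-scale, multi-directional NSST (several directional high-frequency sub-bands per level) and fuse_high with the FL-DTNP firing-map comparison developed in the paper; the structure of the pipeline, however, follows the abstract.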

     
