Citation: Cheng Haonan, Li Sijia, Liu Shiguang. Deep Cross-Modal Synthesis of Environmental Sound[J]. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(12): 2047-2055. DOI: 10.3724/SP.J.1089.2019.17906


Deep Cross-Modal Synthesis of Environmental Sound

  • Abstract: With the continuous development of computer graphics technology, users have placed higher demands on the sound that accompanies video and animation. To address the high algorithmic complexity and limited scalability of existing methods, this paper proposes a deep-learning environmental sound synthesis algorithm based on a conditional generative adversarial network (CGAN) and SampleRNN. First, deep visual features are extracted from the video with a VGG (visual geometry group) network model. These features are then passed through a temporally synchronized sequential network model, realizing video-to-audio cross-modal feature transformation with a higher synchronization rate. Finally, a timbre enhancement network model refines the timbre of the synthesized sound, improving the scalability of the network structure and yielding environmental sound that is synchronized with the video and highly realistic. Training and testing on 12 categories of videos from an audio-visual cross-modal dataset, followed by subjective and objective evaluation of the results, show that the generated sounds are realistic and that the proposed algorithm improves on the scalability of existing methods.
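The first stage described in the abstract, extracting deep visual features from video frames with a VGG network, can be sketched as follows. This is an illustrative sketch rather than the authors' implementation: the specific VGG variant (VGG-19), the pretrained ImageNet weights, the tapped layer, and the 224×224 frame size are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch (not the paper's code): per-frame deep visual feature
# extraction with a pretrained VGG backbone, as a stand-in for the abstract's
# "VGG network model" stage. Model variant, weights, and frame size are assumed.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Load a VGG-19 backbone pretrained on ImageNet and drop the classifier head,
# keeping the convolutional features plus global pooling.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.eval()
feature_extractor = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())

# Standard ImageNet preprocessing applied to each video frame.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_frame_features(frames):
    """frames: list of PIL.Image video frames -> (num_frames, 25088) feature tensor."""
    batch = torch.stack([preprocess(f) for f in frames])
    return feature_extractor(batch)
```

In a pipeline like the one the abstract outlines, the per-frame feature sequence produced here would then be fed to the synchronization network that maps visual features to audio features before waveform generation.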

     
