Multimodal Visibility Deep Learning Model Based on Visible-Infrared Image Pair
Abstract
To enhance the robustness of visibility deep learning models trained on small datasets, this paper proposes a multimodal visibility deep learning model based on visible-infrared image pairs. Unlike conventional visibility deep learning models, the proposed model uses visible-infrared image pairs as observation data. First, the raw dataset is preprocessed with image registration to generate visible-infrared image pairs with identical resolution and view range. Then, a new convolutional neural network structure is constructed, consisting of three CNN streams connected in parallel; the feature maps of each stream are extracted and fused by propagation from the shallow layers to the deep layers. Finally, the visibility range level is classified by a softmax layer based on the feature descriptor output by the fully connected layer. Experimental results demonstrate that, compared with conventional visibility deep learning models, the proposed method strongly enhances both accuracy and robustness, especially for small training datasets.
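The fusion pattern described in the abstract (parallel streams whose feature maps are combined before a fully connected softmax classifier) can be sketched in a minimal toy form. Note this is an illustrative sketch only: the single convolutional layer, fusion by summation, pixel-averaged third stream, and five visibility levels are all assumptions for demonstration, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel map x with kernel k,
    followed by a ReLU nonlinearity."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return np.maximum(out, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy registered visible-infrared pair (identical resolution and view range).
visible = rng.random((16, 16))
infrared = rng.random((16, 16))
# Hypothetical third stream: a simple pixel-level average of the pair.
fused_input = 0.5 * (visible + infrared)

streams = [visible, infrared, fused_input]
kernels = [rng.standard_normal((3, 3)) for _ in streams]

# One convolutional layer per parallel stream, then fuse the resulting
# feature maps by summation (the paper fuses at several depths; one shown).
feats = [conv2d(x, k) for x, k in zip(streams, kernels)]
fused = np.stack(feats).sum(axis=0)

# Fully connected layer + softmax over (assumed) 5 visibility range levels.
W = rng.standard_normal((5, fused.size)) * 0.01
probs = softmax(W @ fused.ravel())
level = int(np.argmax(probs))
print(f"predicted visibility level: {level}")
```

With random weights the predicted level is meaningless; the sketch only shows how per-stream feature maps are extracted in parallel, merged into one descriptor, and mapped to a class distribution.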