This project implements multi-exposure image fusion using a convolutional neural network (CNN). The CNN extracts features from the input images, and these features are fused to produce a uniformly exposed image. Because no ground-truth images are available for training the CNN, a pretrained network is used instead. The code is an implementation of this research paper, which tested several pretrained networks at different CNN layer depths and concluded that VGG19 with feature layer 1 performed best. The VGG19 pretrained network for MATLAB is available on the MathWorks platform; follow the instructions to install it, then verify the installation in the command window:
>> vgg19
ans =
  SeriesNetwork with properties:
    Layers: [47×1 nnet.cnn.layer.Layer]
VGG19 has 47 layers, but only the 1st layer is used to extract the features. MATLAB provides the activations function for this:
net = vgg19;
features = activations(net, I(:,:,:,ii), 1, 'OutputAs', 'channels');
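Putting the two calls together, feature extraction over a whole exposure stack might look like the following sketch. The 4-D array `I` (height by width by 3 by number of exposures) and the cell array `F` are assumptions for illustration, not names from the original code:

```matlab
% Sketch: extract layer-1 VGG19 features for each input exposure.
% Assumes I is an H-by-W-by-3-by-K array of exposure-bracketed images.
net = vgg19;                 % load the pretrained network (once)
K = size(I, 4);              % number of input exposures
F = cell(1, K);
for ii = 1:K
    % 'OutputAs','channels' keeps the activations at the input's
    % spatial resolution, one 2-D map per feature channel.
    F{ii} = activations(net, I(:,:,:,ii), 1, 'OutputAs', 'channels');
end
```

Loading the network outside the loop matters in practice: calling vgg19 per image reloads the weights every iteration.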
These features have the same spatial dimensions as the input image. As described in the paper, pixel visibility and temporal consistency are computed to form the fusion weights. A brighter pixel receives a lower weight and a darker pixel a higher weight, so the fused result is expected to have uniform exposure. At https://free-thesis.com we also applied post-processing steps in the HSV color space, which further improve the results. To compare the exposure of the fused image with the inputs, image histograms are plotted.

Figure: Histogram of Input Images
Figure: Histogram of Fused Image
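The inverse-brightness weighting and the histogram comparison can be sketched as below. This is a simplified, assumed formulation (plain intensity as the visibility measure, no temporal-consistency term), not the paper's exact weight definition:

```matlab
% Sketch (assumed formulation): inverse-brightness weight maps,
% normalized across exposures, then a per-pixel weighted sum.
% Assumes I is H-by-W-by-3-by-K, double, with values in [0,1].
K = size(I, 4);
W = zeros(size(I, 1), size(I, 2), K);
for ii = 1:K
    gray = rgb2gray(I(:,:,:,ii));
    W(:,:,ii) = 1 - gray;          % brighter pixel -> lower weight
end
W = W ./ (sum(W, 3) + eps);        % normalize so weights sum to 1 per pixel

fused = zeros(size(I, 1), size(I, 2), 3);
for ii = 1:K
    fused = fused + I(:,:,:,ii) .* W(:,:,ii);  % implicit expansion over RGB
end

imhist(rgb2gray(fused));           % histogram of the fused image
```

Plotting imhist for each input exposure alongside the fused result gives the histogram comparison shown in the figures above.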