2020 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS)

Abstract

In this work, we construct an improved VGG-16 convolutional neural network model for color constancy, which we call the VGGC network, to accurately predict scene illumination. VGGC takes a 224×224 image patch as input and operates on the spatial structure of the image; unlike previous methods, it requires no hand-crafted feature extraction. The VGGC network consists of an input layer, 16 convolutional layers, and two fully connected layers. To estimate scene illumination more effectively, the VGGC network optimizes the learned features. Preliminary experiments on images with spatially varying illumination show that VGGC's local illumination estimation is stable, and that the model generalizes better and is more robust than current models that use convolutional neural networks to predict scene lighting.
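To make the architecture described above concrete, the following is a minimal sketch of a patch-based illuminant estimator in PyTorch. The abstract only specifies a 224×224 input patch, 16 convolutional layers, and two fully connected layers, so the block grouping (2-2-3-3-3-3), 3×3 kernels, channel widths, hidden size, and the normalized 3-channel RGB illuminant output are all assumptions for illustration, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VGGC(nn.Module):
    """VGG-style patch-based illuminant estimator (illustrative configuration).

    Assumed layout: 16 conv layers grouped into six blocks (2-2-3-3-3-3)
    with 3x3 kernels and max pooling, followed by two fully connected
    layers that regress a 3-channel RGB illuminant estimate.
    """

    def __init__(self):
        super().__init__()
        cfg = [(64, 2), (128, 2), (256, 3), (256, 3), (512, 3), (512, 3)]
        layers = []
        in_ch = 3
        for out_ch, n_convs in cfg:
            for _ in range(n_convs):
                layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                           nn.ReLU(inplace=True)]
                in_ch = out_ch
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.features = nn.Sequential(*layers)
        # After six poolings, a 224x224 patch is reduced to 3x3 spatially.
        self.fc1 = nn.Linear(512 * 3 * 3, 1024)
        self.fc2 = nn.Linear(1024, 3)

    def forward(self, x):
        x = self.features(x)           # (N, 512, 3, 3)
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        rgb = self.fc2(x)
        # Normalize so only the chromaticity of the illuminant matters.
        return F.normalize(rgb, dim=1)


if __name__ == "__main__":
    model = VGGC()
    patch = torch.rand(8, 3, 224, 224)  # a batch of 224x224 image patches
    print(model(patch).shape)           # torch.Size([8, 3])

Because the network operates on local patches, per-patch predictions can be aggregated over an image to handle spatially varying illumination, which is consistent with the local estimation setting described in the abstract.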
