Implementation of Deep Convolutional Generative Adversarial Network for Grayscale Image Colorization
Main Article Content
Abstract
Adding color to a grayscale image makes it possible to enhance the image quickly and without specialist knowledge. This study colorizes images using the Deep Convolutional Generative Adversarial Network (DCGAN) and Generative Adversarial Network (GAN) methods. The models are trained on the Places365 dataset, which contains 98,721 training images and 6,600 test images. Each image is converted to the CIELAB color space; the L channel is used as the grayscale input, and the AB channels are used as the target output. Testing compares accuracy using the Mean Absolute Error (MAE) and Structural Similarity Index Measure (SSIM). The MAE results show that the average MAE of the DCGAN method is lower than that of the GAN method, at 10.18 versus 10.81. The SSIM results show that the DCGAN method achieves a higher average, 91.54%, compared with 68.32% for the GAN method. A questionnaire given to 30 respondents showed that the DCGAN results were preferred over the GAN results, at 88.40% and 11.60%, respectively.
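The abstract describes splitting CIELAB images into an L (grayscale) input and AB (color) targets, then scoring predictions with MAE and SSIM. The sketch below illustrates one plausible way to do this in Python with scikit-image; the helper names, the normalization ranges, and the exact channels the metrics are computed over are assumptions, not the authors' published code.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb
from skimage.metrics import structural_similarity

def split_lab_channels(rgb_image):
    """Hypothetical preprocessing: rgb_image is a float array in [0, 1], shape (H, W, 3).

    Returns the L channel (grayscale input) and AB channels (color target),
    both scaled to roughly [-1, 1] for network training (an assumed convention).
    """
    lab = rgb2lab(rgb_image)          # L in [0, 100], a/b roughly in [-128, 127]
    L = lab[..., :1] / 50.0 - 1.0
    ab = lab[..., 1:] / 128.0
    return L, ab

def evaluate_colorization(true_rgb, L, pred_ab):
    """Hypothetical evaluation: MAE on the AB channels and SSIM on the
    reconstructed RGB image, the two metrics named in the abstract."""
    true_ab = rgb2lab(true_rgb)[..., 1:] / 128.0
    mae = np.mean(np.abs(true_ab - pred_ab))

    # Reassemble a LAB image from the grayscale input and the predicted color.
    pred_lab = np.concatenate([(L + 1.0) * 50.0, pred_ab * 128.0], axis=-1)
    pred_rgb = lab2rgb(pred_lab)
    ssim = structural_similarity(true_rgb, pred_rgb,
                                 channel_axis=-1, data_range=1.0)
    return mae, ssim
```

Note that the reported MAE values (around 10) suggest the paper computes the error on unnormalized pixel or channel values; the scaling above is only one possible convention.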
Article Details
How to Cite
[1] M. Ricky and M. E. Al Rivan, “Implementation of Deep Convolutional Generative Adversarial Network for Grayscale Image Colorization”, JuTISI, vol. 8, no. 3, pp. 556 –, Dec. 2022.
Section
Articles
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium.