Self-Supervised Dusty Image Enhancement Using Generative Adversarial Networks
Oral Presentation
Authors
1Department of Computer Engineering, University of Kurdistan, Sanandaj, Iran
2Department of Computer Engineering, University of Kurdistan, Sanandaj, Iran
Abstract
Outdoor images are often degraded by atmospheric phenomena, which cause low contrast and poor quality and visibility. Since dust phenomena are becoming more frequent, improving the quality of dusty images as a pre-processing step is an important challenge. To address this challenge, we propose a self-supervised method based on a generative adversarial network. The proposed framework consists of two generators, a master and a supporter, which are trained jointly. The master and supporter generators are trained on synthetic and real dusty images, respectively, whose labels are generated within the proposed framework. Due to the lack of real-world dusty images and the weakness of synthetic dusty images in representing depth, we use an effective learning mechanism in which the supporter helps the master generate satisfactory dust-free images by learning to restore image depth and transferring this knowledge to the master. The experimental results demonstrate that the proposed method performs favorably against previous dusty image enhancement methods on benchmark real-world dusty images.
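The following is a minimal, hypothetical sketch (in PyTorch) of the joint master/supporter training scheme outlined in the abstract. The tiny network definitions, loss weights, and the knowledge-transfer term are illustrative assumptions, not the authors' actual implementation; the supporter's own self-supervised training on real dusty images is omitted for brevity.

```python
# Hypothetical sketch of joint master/supporter training with a Pix2Pix-style
# adversarial loss. All module names and weights are assumptions for illustration.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder encoder-decoder generator (stand-in for a Pix2Pix-style U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Placeholder PatchGAN-like discriminator (returns raw logits)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

master, supporter, disc = TinyGenerator(), TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(master.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
adv_loss, rec_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One illustrative joint step: a synthetic dusty/clean pair supervises the
# master, while a real dusty image carries the supporter's knowledge over.
synthetic_dusty, synthetic_clean = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
real_dusty = torch.rand(1, 3, 64, 64)

# Discriminator update: real clean image vs. the master's dust-free estimate.
fake_clean = master(synthetic_dusty)
d_real, d_fake = disc(synthetic_clean), disc(fake_clean.detach())
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Master update: Pix2Pix-style adversarial + L1 loss on the synthetic pair,
# plus an assumed knowledge-transfer term aligning the master with the
# (depth-aware) supporter's output on the real dusty image.
fake_clean = master(synthetic_dusty)
d_fake = disc(fake_clean)
loss_master = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * rec_loss(fake_clean, synthetic_clean)
loss_transfer = rec_loss(master(real_dusty), supporter(real_dusty).detach())
loss_g = loss_master + loss_transfer
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```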
Keywords
dusty image enhancement, GANs, adversarial learning, Pix2Pix, self-supervised learning, dehazing