The effect of variance-based patch selection on no-reference image quality assessment
Oral Presentation
Authors
1Kharazmi University
2Department of Electrical and Computer Engineering, Faculty of Engineering, Kharazmi University, Tehran, Iran
Abstract
The objective of no-reference image quality assessment (NR-IQA) is to evaluate image quality as it is perceived by human observers. Since no reference image is available, this remains a challenging and unresolved problem. Convolutional neural networks (CNNs) have gained popularity in recent years and have outperformed many traditional techniques in image processing. To mitigate overfitting, a large proportion of deep-learning-based IQA methods work with small image patches and assess the quality of the entire image from the average of the patch scores. Patch extraction is therefore one of the most crucial elements of CNN-based quality assessment methods. Assuming that human visual perception is well suited to extracting structural details from a scene, we analyze the effect of feeding informative, structural patches to the quality framework. In this paper, a method for structural patch extraction is presented, based on the variance of each patch. The results show that the presented method yields an improvement over random patch selection. The proposed model also performs well in cross-dataset experiments on common distortions, indicating high generalizability. Additionally, the model was tested on flipped images, and the outcomes are satisfactory.
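The abstract describes selecting patches by their variance rather than at random. A minimal sketch of that idea (not the authors' code; patch size, patch count, and the non-overlapping grid are assumptions for illustration) could look like this:

```python
import numpy as np

def select_patches_by_variance(image, patch_size=32, num_patches=8):
    """Extract non-overlapping patches and keep the highest-variance ones.

    Higher pixel-intensity variance is taken as a proxy for structural,
    informative content, which the paper argues better suits NR-IQA
    than random patch selection.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    # Rank patches by variance, most structured first.
    patches.sort(key=lambda p: p.var(), reverse=True)
    return patches[:num_patches]

# Example: a flat image with one textured (noisy) block; the selector
# should pick the textured block over the three uniform ones.
img = np.zeros((64, 64))
img[:32, :32] = np.random.default_rng(0).random((32, 32))
selected = select_patches_by_variance(img, patch_size=32, num_patches=1)
print(selected[0].var() > 0)  # prints True
```

In a full pipeline, the selected patches would each be scored by the CNN and the per-patch scores averaged into an image-level quality estimate, as the abstract describes.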
Authors [Persian]
سیدفرهاد حسینی بنویدی
Keywords
computer vision, image quality assessment, deep learning