Srinath, S and Mitra, S and Rao, S and Soundararajan, R (2024) Learning Generalizable Perceptual Representations for Data-Efficient No-Reference Image Quality Assessment. In: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, pp. 22-31.
Abstract
No-reference (NR) image quality assessment (IQA) is an important tool in enhancing the user experience in diverse visual applications. A major drawback of state-of-the-art NR-IQA techniques is their reliance on a large number of human annotations to train models for a target IQA application. To mitigate this requirement, there is a need for unsupervised learning of generalizable quality representations that capture diverse distortions. We enable the learning of low-level quality features agnostic to distortion types by introducing a novel quality-aware contrastive loss. Further, we leverage the generalizability of vision-language models by fine-tuning one such model to extract high-level image quality information through relevant text prompts. The two sets of features are combined to effectively predict quality by training a simple regressor with very few samples on a target dataset. Additionally, we design zero-shot quality predictions from both pathways in a completely blind setting. Our experiments on diverse datasets encompassing various distortions show the generalizability of the features and their superior performance in the data-efficient and zero-shot settings. © 2024 IEEE.
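The abstract describes a two-pathway design: low-level features from a contrastively trained encoder, high-level features from a fine-tuned vision-language model, a small regressor over their combination, and a zero-shot prompt-based score. The following is a minimal sketch of that pipeline shape only, assuming synthetic random vectors as stand-ins for both encoders; the dimensions, the Ridge regressor, and the antonym prompts are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: synthetic stand-ins for the two feature pathways
# described in the abstract. Names, dimensions, and the Ridge regressor are
# assumptions, not the authors' implementation.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_images, d_low, d_high = 200, 128, 512

# Stand-ins for the two pathways: low-level features from the contrastively
# trained encoder and high-level features from the vision-language model.
low_level = rng.normal(size=(n_images, d_low))
high_level = rng.normal(size=(n_images, d_high))
mos = rng.uniform(0.0, 100.0, size=n_images)  # synthetic mean opinion scores

# Data-efficient setting: concatenate the two feature sets and fit a simple
# regressor with very few labelled samples from the target dataset.
features = np.concatenate([low_level, high_level], axis=1)
train_idx = rng.choice(n_images, size=20, replace=False)
test_idx = np.setdiff1d(np.arange(n_images), train_idx)

reg = Ridge(alpha=1.0).fit(features[train_idx], mos[train_idx])
srocc, _ = spearmanr(reg.predict(features[test_idx]), mos[test_idx])
print(f"SROCC on held-out images: {srocc:.3f}")

# Zero-shot pathway (assumption): score each image by the softmax-normalized
# similarity of its embedding to antonym quality prompts, e.g. "good photo"
# vs "bad photo" embedded by the same vision-language model.
good_emb = rng.normal(size=d_high)
bad_emb = rng.normal(size=d_high)

def cosine(mat, vec):
    return mat @ vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(vec) + 1e-8)

sims = np.stack([cosine(high_level, good_emb), cosine(high_level, bad_emb)], axis=1)
zero_shot_score = np.exp(sims[:, 0]) / np.exp(sims).sum(axis=1)  # P("good")
```

With random features the reported SROCC is near zero; the sketch only shows how the combined features, the few-shot regressor, and the prompt-based zero-shot score fit together.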
| Item Type: | Conference Paper |
|---|---|
| Publication: | Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024 |
| Publisher: | Institute of Electrical and Electronics Engineers Inc. |
| Additional Information: | The copyright for this article belongs to the authors. |
| Department/Centre: | Division of Electrical Sciences > Electrical Communication Engineering |
| Date Deposited: | 27 May 2024 05:35 |
| Last Modified: | 27 May 2024 05:36 |
| URI: | https://eprints.iisc.ac.in/id/eprint/85039 |