ePrints@IISc

UM-adapt: Unsupervised multi-task adaptation using adversarial cross-task distillation

Kundu, JN and Lakkakula, N and Radhakrishnan, VB (2019) UM-adapt: Unsupervised multi-task adaptation using adversarial cross-task distillation. In: 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019, 27 Oct-2 Nov. 2019, South Korea, pp. 1436-1445.

PDF: pro_iee_int_con_com_vis_1436-1445_2019.pdf - Published Version (344kB)
Restricted to registered users only.
Official URL: https://dx.doi.org/10.1109/ICCV.2019.00152

Abstract

Aiming toward human-level generalization, adaptable representation learning methods with greater transferability need to be explored. Most existing approaches address task-transferability and cross-domain adaptation independently, resulting in limited generalization. In this paper, we propose UM-Adapt, a unified framework to effectively perform unsupervised domain adaptation for spatially structured prediction tasks while simultaneously maintaining a balanced performance across individual tasks in a multi-task setting. To realize this, we propose two novel regularization strategies: (a) contour-based content regularization (CCR) and (b) exploitation of inter-task coherency through a cross-task distillation module. Furthermore, avoiding a conventional ad-hoc domain discriminator, we re-utilize the cross-task distillation loss as the output of an energy function to adversarially minimize the input domain discrepancy. Through extensive experiments, we demonstrate superior generalizability of the learned representations, simultaneously for multiple tasks, under domain shifts from synthetic to natural environments. UM-Adapt yields state-of-the-art transfer learning results on ImageNet classification and comparable performance on the PASCAL VOC 2007 detection task, even with a smaller backbone network. Moreover, the resulting semi-supervised framework outperforms the current fully supervised multi-task learning state of the art on both the NYUD and Cityscapes datasets.
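The core idea of the cross-task distillation energy can be sketched in a few lines. This is a minimal illustrative toy, not the authors' implementation: the function names, the mean-squared discrepancy, and the fixed linear map standing in for a learned task-transfer network are all assumptions made here for clarity.

```python
import numpy as np

def cross_task_distillation_energy(pred_a, pred_b, transfer):
    """Discrepancy between task-A's prediction mapped into task-B's
    output space and task-B's direct prediction. In UM-Adapt-style
    training this discrepancy doubles as the energy that is minimized
    adversarially in place of an ad-hoc domain discriminator."""
    transferred = transfer(pred_a)
    return float(np.mean((transferred - pred_b) ** 2))

# Toy stand-in for a learned transfer network: a fixed linear map.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
transfer = lambda x: x @ W

# Stand-ins for two spatially structured task outputs (e.g. depth and
# segmentation features flattened to 2-D arrays for illustration).
depth_pred = rng.normal(size=(8, 4))
coherent_seg_pred = depth_pred @ W          # perfectly task-coherent
incoherent_seg_pred = rng.normal(size=(8, 4))  # unrelated prediction

# Coherent cross-task predictions yield (near-)zero energy; incoherent
# ones yield a large energy, which is the signal the adversarial
# alignment exploits.
e_low = cross_task_distillation_energy(depth_pred, coherent_seg_pred, transfer)
e_high = cross_task_distillation_energy(depth_pred, incoherent_seg_pred, transfer)
assert e_low < 1e-12 < e_high
```

The design point this toy illustrates is that no separate domain discriminator is needed: the same distillation discrepancy that enforces inter-task coherency can serve as the energy whose value distinguishes source-like from target-like representations.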

Item Type: Conference Paper
Publication: Proceedings of the IEEE International Conference on Computer Vision
Publisher: Institute of Electrical and Electronics Engineers Inc.
Additional Information: Cited by: 0; Conference: 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019; Conference date: 27 October 2019 through 2 November 2019; Conference code: 158036
Department/Centre: Division of Interdisciplinary Sciences > Computational and Data Sciences
Date Deposited: 18 Aug 2020 08:32
Last Modified: 18 Aug 2020 08:32
URI: http://eprints.iisc.ac.in/id/eprint/65003
