
Twofer: Tackling Continual Domain Shift with Simultaneous Domain Generalization and Adaptation

Chenxi Liu*, Lixu Wang, Lingjuan Lyu, Chen Sun*, Xiao Wang*, Qi Zhu*

* External authors

ICLR 2023

Abstract

In real-world applications, deep learning models often run in non-stationary environments where the target data distribution continually shifts over time. Numerous domain adaptation (DA) methods, in both online and offline modes, have been proposed to improve cross-domain adaptation ability. However, these DA methods typically achieve good performance only after a long period of adaptation and perform poorly on new domains before and during adaptation, especially when domain shifts happen suddenly and momentarily. On the other hand, domain generalization (DG) methods have been proposed to improve model generalization on unadapted domains. However, existing DG works are ineffective for continually changing domains due to severe catastrophic forgetting of learned knowledge. To overcome these limitations of DA and DG in tackling continual domain shifts, we propose Twofer, a framework that simultaneously achieves target domain generalization (TDG), target domain adaptation (TDA), and forgetting alleviation (FA). Twofer includes a training-free data augmentation module to prepare data for TDG, a novel pseudo-labeling mechanism to provide reliable supervision for TDA, and a prototype contrastive alignment algorithm to align different domains for achieving TDG, TDA, and FA. Extensive experiments on the Digits, PACS, and DomainNet datasets demonstrate that Twofer substantially outperforms state-of-the-art works in Continual DA, Source-Free DA, Test-Time/Online DA, Single DG, Multiple DG, and Unified DA&DG. We envision this work as a significant milestone in tackling continual data domain shifts, with improved performance across target domain generalization, adaptation, and forgetting alleviation.
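The abstract names two of Twofer's building blocks at a high level: a pseudo-labeling mechanism that supplies reliable supervision for TDA, and a prototype contrastive alignment objective that pulls domains together. The sketch below is a generic PyTorch illustration of how such pieces typically fit together, not the paper's exact algorithm; the function names, the 0.9 confidence threshold, and the 0.1 temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_pseudo_labels(logits, threshold=0.9):
    # Keep only high-confidence predictions as pseudo-labels: return
    # argmax labels plus a mask of samples whose softmax confidence
    # clears the threshold, so noisy predictions are filtered out.
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    return labels, conf >= threshold

def prototype_contrastive_loss(features, labels, prototypes, temperature=0.1):
    # InfoNCE-style objective: pull each target feature toward its
    # (pseudo-)class prototype and push it away from other prototypes.
    features = F.normalize(features, dim=1)      # (N, D)
    prototypes = F.normalize(prototypes, dim=1)  # (C, D)
    logits = features @ prototypes.t() / temperature  # (N, C) similarities
    return F.cross_entropy(logits, labels)

# Toy usage: align target-domain features to per-class prototypes
# (e.g., running means of previously learned features) using only
# the confidently pseudo-labeled samples.
feats = torch.randn(32, 128, requires_grad=True)  # target embeddings
cls_logits = torch.randn(32, 10)                  # classifier outputs
protos = torch.randn(10, 128)                     # one prototype per class
labels, mask = confidence_pseudo_labels(cls_logits)
if mask.any():
    loss = prototype_contrastive_loss(feats[mask], labels[mask], protos)
    loss.backward()
```

Because the same prototypes can anchor unadapted, currently adapting, and previously seen domains, a loss of this shape is one plausible way a single objective could serve TDG, TDA, and FA at once, as the abstract claims for Twofer's alignment module.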

Related Publications

FedMef: Towards Memory-efficient Federated Dynamic Pruning

CVPR, 2024
Hong Huang, Weiming Zhuang*, Chen Chen, Lingjuan Lyu

Federated learning (FL) promotes decentralized training while prioritizing data confidentiality. However, its application on resource-constrained devices is challenging due to the high demand for computation and memory resources for training deep learning models. Neural netw…

DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models

ICLR, 2024
Zhenting Wang, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas*, Shiqing Ma*

Recent text-to-image diffusion models have shown surprising performance in generating high-quality images. However, concerns have arisen regarding the unauthorized data usage during the training or fine-tuning process. One example is when a model trainer collects a set of im…

FedWon: Triumphing Multi-domain Federated Learning Without Normalization

ICLR, 2024
Weiming Zhuang*, Lingjuan Lyu

Federated learning (FL) enhances data privacy with collaborative in-situ training on decentralized clients. Nevertheless, FL encounters challenges due to non-independent and identically distributed (non-i.i.d) data, leading to potential performance degradation and hindered c…
