Publication

Improving Domain Generalization in Contrastive Learning Using Adaptive Temperature Control

*Lewis, R. A., *Matton, K., Picard, R., & Guttag, J. (2023, December). Improving Domain Generalization in Contrastive Learning Using Domain-Aware Temperature Control. In NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models & NeurIPS 2023 Workshop on Self-Supervised Learning. (*Equal contribution)

Abstract

Self-supervised pre-training with contrastive learning is a powerful method for learning from sparsely labeled data. However, performance can drop considerably when there is a shift in the distribution of data from training to test time. We study this phenomenon in a setting in which the training data come from multiple domains and the test data come from a domain not seen at training that is subject to significant covariate shift. We present a new method for contrastive learning that incorporates domain labels to increase the domain invariance of learned representations, leading to improved out-of-distribution generalization. Our method adjusts the temperature parameter in the InfoNCE loss, which controls the relative weighting of negative pairs, using the probability that a negative sample comes from the same domain as the anchor. This upweights pairs from more similar domains, encouraging the model to discriminate samples based on domain-invariant attributes. Through experiments on a variant of the MNIST dataset, we demonstrate that our method yields better out-of-distribution performance than domain generalization baselines. Furthermore, our method maintains strong in-distribution task performance, substantially outperforming baselines on this measure.
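
To make the mechanism concrete, below is a minimal PyTorch sketch of an InfoNCE loss with a per-negative temperature. The function name, the `p_same_domain` input (e.g., probabilities from a separate domain classifier), and the specific mapping from domain-similarity probability to temperature are illustrative assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def domain_aware_info_nce(anchor, positive, negatives, p_same_domain,
                          base_temp=0.1, alpha=1.0):
    """Illustrative InfoNCE loss with a per-negative temperature.

    anchor:        (d,)   embedding of the anchor sample
    positive:      (d,)   embedding of the positive sample
    negatives:     (n, d) embeddings of the negative samples
    p_same_domain: (n,)   estimated probability that each negative shares
                          the anchor's domain (assumed to come from a
                          domain classifier; not specified here)
    """
    # Cosine similarities between the anchor and each candidate.
    anchor = F.normalize(anchor, dim=0)
    pos_sim = torch.dot(anchor, F.normalize(positive, dim=0))
    neg_sims = F.normalize(negatives, dim=1) @ anchor  # shape (n,)

    # Assumed mapping: negatives likely to share the anchor's domain get
    # a lower temperature, which upweights them in the denominator.
    neg_temps = base_temp / (1.0 + alpha * p_same_domain)

    logits = torch.cat([(pos_sim / base_temp).view(1), neg_sims / neg_temps])
    # Negative log-probability of the positive pair under the softmax.
    return torch.logsumexp(logits, dim=0) - logits[0]
```

Lowering the temperature on negatives that likely share the anchor's domain sharpens their contribution to the softmax denominator, so the model cannot separate them using domain cues alone and is pushed toward domain-invariant features.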
