SIDDA: SInkhorn Dynamic Domain Adaptation for Image Classification with Equivariant Neural Networks

Modern neural networks (NNs) often do not generalize well in the presence of a “covariate shift”; that is, in situations where the training and test data distributions differ, but the conditional distribution of classification labels given the data remains unchanged. In such cases, NN generalization can be reduced to a problem of learning more robust, domain-invariant features. Domain adaptation (DA) methods encompass a broad range of techniques aimed at achieving this; however, these methods have struggled with the need for extensive hyperparameter tuning, which incurs significant computational costs. In this work, we introduce SIDDA, an out-of-the-box DA training algorithm built upon the Sinkhorn divergence that achieves effective domain alignment with minimal hyperparameter tuning and computational overhead. We demonstrate the efficacy of our method on multiple simulated and real datasets of varying complexity, including simple shapes, handwritten digits, and real astronomical observations. These datasets exhibit covariate shifts due to noise, blurring, and differences between telescopes. SIDDA is compatible with a variety of NN architectures and works particularly well in improving classification accuracy and model calibration when paired with symmetry-aware equivariant neural networks (ENNs). We find that SIDDA consistently enhances the generalization capabilities of NNs, achieving improvements of up to ≈ 40% in classification accuracy on unlabeled target data, while also providing a more modest gain of ≲ 1% on labeled source data. We also study the efficacy of DA on ENNs with respect to varying orders of the dihedral group, and find that model performance improves as the degree of equivariance increases. Finally, we find that SIDDA enhances model calibration on both source and target data, with the most significant gains in the unlabeled target domain, where it achieves over an order of magnitude improvement in the expected calibration error and Brier score. SIDDA’s versatility across various NN models and datasets, combined with its automated approach to domain alignment, has the potential to significantly advance multi-dataset studies by enabling the development of highly generalizable models.
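The abstract describes SIDDA as a DA training objective built on the Sinkhorn divergence, applied alongside ordinary supervised classification on the labeled source domain. The sketch below is a minimal, hypothetical illustration of that general recipe in PyTorch, not the paper's implementation: it computes a debiased Sinkhorn divergence between source and target latent features via log-domain Sinkhorn iterations and adds it to the source cross-entropy loss. The names `encoder`, `classifier`, `align_weight`, `eps`, and `n_iters` are placeholders I've introduced for illustration, and the paper's dynamic weighting of the alignment term and its equivariant architectures are not reproduced here.

```python
import math
import torch
import torch.nn.functional as F


def _entropic_ot(x, y, eps=0.05, n_iters=50):
    """Entropy-regularized OT cost between two feature batches (uniform weights),
    using log-domain Sinkhorn iterations with a squared-Euclidean ground cost."""
    C = torch.cdist(x, y, p=2) ** 2                            # (n, m) pairwise cost matrix
    n, m = C.shape
    log_a = torch.full((n,), -math.log(n), device=x.device)    # log of uniform source weights
    log_b = torch.full((m,), -math.log(m), device=x.device)    # log of uniform target weights
    f = torch.zeros(n, device=x.device)                        # dual potentials
    g = torch.zeros(m, device=x.device)
    for _ in range(n_iters):
        f = -eps * torch.logsumexp((g[None, :] - C) / eps + log_b[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - C) / eps + log_a[:, None], dim=0)
    # Transport plan P_ij = a_i * b_j * exp((f_i + g_j - C_ij) / eps); cost = <P, C>.
    log_P = (f[:, None] + g[None, :] - C) / eps + log_a[:, None] + log_b[None, :]
    return (log_P.exp() * C).sum()


def sinkhorn_divergence(x, y, eps=0.05, n_iters=50):
    """Debiased Sinkhorn divergence: S(x, y) = OT(x, y) - (OT(x, x) + OT(y, y)) / 2."""
    return (_entropic_ot(x, y, eps, n_iters)
            - 0.5 * _entropic_ot(x, x, eps, n_iters)
            - 0.5 * _entropic_ot(y, y, eps, n_iters))


def train_step(encoder, classifier, optimizer, src_x, src_y, tgt_x, align_weight=1.0):
    """One hypothetical DA training step: supervised cross-entropy on the labeled
    source batch plus a Sinkhorn alignment term between source and target features."""
    z_src, z_tgt = encoder(src_x), encoder(tgt_x)           # latent features for both domains
    ce_loss = F.cross_entropy(classifier(z_src), src_y)     # labels exist only for the source
    align_loss = sinkhorn_divergence(z_src, z_tgt)          # unsupervised domain alignment
    loss = ce_loss + align_weight * align_loss              # fixed weight here; SIDDA tunes this automatically
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The debiasing terms OT(x, x) and OT(y, y) make the divergence vanish when the source and target feature distributions coincide, which is what makes the Sinkhorn divergence a better-behaved alignment objective than the raw entropic OT cost.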
