Abstract: Recently, the adversarial training framework of maximizing and minimizing the discrepancy between two classifiers (bi-classifier) has proved effective in unsupervised domain adaptation (UDA). Classical bi-classifier UDA approaches usually measure the difference between the two classifiers with simple intra-class discrepancies, such as the L1 norm or the Kullback-Leibler divergence. From a geometric point of view, this work designs a novel Euclidean bi-classifier discrepancy by considering the distribution of the two classifiers' predictions in Euclidean space and addressing the shortcomings of classical bi-classifier algorithms, and incorporates it into the adversarial UDA framework. This novel discrepancy can effectively distinguish whether the two probability distributions predicted by the bi-classifier are close in determinacy or in uncertainty. In addition, we provide theoretical support by proving an upper bound on the error of the metric. Experiments on public UDA benchmarks show that the proposed Euclidean bi-classifier adversarial algorithm achieves average accuracies of 98.3% on the small-scale Digits dataset, 87.8% on the medium-scale Office-31 dataset, and 81.7% on the large-scale VisDA dataset, outperforming other bi-classifier UDA methods that use intra-class discrepancies and achieving results comparable to state-of-the-art methods.
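To illustrate the geometric intuition behind the abstract, the sketch below contrasts a pointwise L1 discrepancy with a Euclidean (L2) discrepancy between two classifiers' softmax outputs. This is a minimal NumPy illustration; the probability values and function names are illustrative assumptions, not the paper's actual formulation or loss.

```python
import numpy as np

def l1_discrepancy(p1, p2):
    # Mean absolute difference between the two classifiers' predictions,
    # the simple pointwise discrepancy used by classical bi-classifier UDA.
    return np.mean(np.abs(p1 - p2))

def euclidean_discrepancy(p1, p2):
    # Euclidean (L2) distance between the two prediction vectors,
    # a geometric discrepancy in the spirit of the one proposed here.
    return np.linalg.norm(p1 - p2)

# Two pairs of softmax outputs: a confident (determinate) disagreement
# concentrated on two classes vs. an uncertain one spread over four classes.
p_a1 = np.array([0.7, 0.1, 0.1, 0.1])
p_a2 = np.array([0.1, 0.7, 0.1, 0.1])
p_b1 = np.array([0.4, 0.4, 0.1, 0.1])
p_b2 = np.array([0.1, 0.1, 0.4, 0.4])

# The L1 discrepancy is identical for both pairs (0.3), so it cannot
# separate the two cases; the Euclidean discrepancy differs.
print(l1_discrepancy(p_a1, p_a2), l1_discrepancy(p_b1, p_b2))
print(euclidean_discrepancy(p_a1, p_a2), euclidean_discrepancy(p_b1, p_b2))
```

Under this toy setup, the pointwise L1 measure assigns both pairs the same discrepancy, while the Euclidean measure distinguishes the determinate disagreement from the uncertain one, matching the abstract's motivation.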