Conditional Probability Distribution Divergence Reduction in Visual Domain Adaptation

Elham Hatefi, Hossein Karshenas, Peyman Adibi

Abstract

The rapid evolution of data challenges traditional machine learning methods and causes many learning models to fail. As a possible solution to the lack of sufficient labeled data, transfer learning aims to exploit knowledge accumulated in an auxiliary domain to develop new predictive models. This article studies a specific type of transfer learning, called domain adaptation, that uses subspace learning to minimize the distance between the class conditional probability distributions of the source and target domains while preserving the discriminative information of the source domain. Efficient classifiers trained on source domain data are used to predict labels for the target domain data, which facilitates the subspace learning. Subspace learning is formulated as an optimization problem, and experiments are carried out on real-world datasets. The results indicate that the proposed method outperforms several existing methods in terms of accuracy on three datasets: Office-Caltech10, Office, and SS5.
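The core idea in the abstract — measuring the divergence between class conditional distributions of source and target data in a learned subspace, with target labels supplied by a source-trained classifier — can be sketched as follows. This is a minimal illustration, not the paper's actual objective: it uses squared distances between projected per-class means as a linear-kernel proxy for conditional-distribution divergence, a fixed projection matrix `W` in place of the learned subspace, and a nearest-class-mean rule as a stand-in for the source-trained classifier; all variable names here are hypothetical.

```python
import numpy as np

def class_conditional_divergence(Xs, ys, Xt, yt, W):
    """Sum over classes of the squared distance between projected class means,
    an empirical proxy for class conditional distribution divergence."""
    div = 0.0
    for c in np.unique(ys):
        mean_s = (Xs[ys == c] @ W).mean(axis=0)  # source class-c mean in subspace
        mean_t = (Xt[yt == c] @ W).mean(axis=0)  # target class-c mean in subspace
        div += float(np.sum((mean_s - mean_t) ** 2))
    return div

# Toy data: two Gaussian classes; the target domain is a shifted copy of the source.
rng = np.random.default_rng(0)
Xs = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
ys = np.repeat([0, 1], 50)
Xt = Xs + rng.normal(0.5, 0.1, Xs.shape)

# Pseudo-label the target by nearest source class mean
# (a stand-in for the source-trained classifier used in the paper).
means = np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])
yt = np.argmin(((Xt[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1), axis=1)

W = np.eye(4)[:, :2]  # placeholder projection; the paper learns W by optimization
print(class_conditional_divergence(Xs, ys, Xt, yt, W))
```

In the actual method, `W` would be the decision variable of the optimization problem, chosen to minimize this divergence while also preserving source discriminative structure; the sketch only shows how the divergence term itself is evaluated once pseudo-labels are available.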

Keywords

Transfer Learning, Domain Adaptation, Class Conditional Probability Distribution
