2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

Abstract

Domain adaptation aims to remedy the loss in classification performance that often occurs due to domain shifts between training and testing datasets. This loss, known as dataset bias, is attributed to variations across datasets. Domain adaptation methods on Grassmann manifolds are among the most popular, including Geodesic Subspace Sampling and the Geodesic Flow Kernel. Grassmann learning facilitates compact characterization by generating linear subspaces and representing them as points on the manifold. However, Grassmannian construction is based on PCA, which is sensitive to outliers. This motivates us to find linear projections that are robust to noise, outliers, and dataset idiosyncrasies. Hence, we combine L1-PCA and Grassmann manifolds to perform robust domain adaptation. We present empirical results to validate improvements and robustness for domain adaptation in object class recognition across datasets.
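The two ingredients named in the abstract can be illustrated concretely: an L1-norm PCA step that estimates a subspace robustly, and the Grassmann-manifold view in which source and target subspaces are compared, e.g. via principal angles (the quantities underlying Geodesic Subspace Sampling and the Geodesic Flow Kernel). The sketch below is not the authors' implementation; it is a minimal illustration, assuming the greedy sign-flipping L1-PCA of Kwak (2008) as the robust subspace estimator, and the function names, dimensions, and toy data are hypothetical.

```python
import numpy as np

def l1_pca(X, n_components, n_iter=100, seed=0):
    """Greedy L1-norm PCA via the sign-flipping fixed point (Kwak, 2008).

    X: (d, n) data matrix, samples as columns, assumed centred.
    Returns an orthonormal (d, n_components) basis, i.e. a point on the
    Grassmann manifold Gr(n_components, d).
    """
    rng = np.random.default_rng(seed)
    d, _ = X.shape
    Xr = X.copy()
    basis = []
    for _ in range(n_components):
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            # Signs of projections; the fixed point maximises sum_i |w^T x_i|.
            p = np.sign(Xr.T @ w)
            p[p == 0] = 1.0
            w_new = Xr @ p
            w_new /= np.linalg.norm(w_new)
            if np.allclose(w_new, w):
                break
            w = w_new
        basis.append(w)
        # Deflate so the next component is sought in the orthogonal complement.
        Xr = Xr - np.outer(w, w) @ Xr
    return np.stack(basis, axis=1)

def principal_angles(U_src, U_tgt):
    """Principal angles between two subspaces (points on the Grassmannian)."""
    s = np.linalg.svd(U_src.T @ U_tgt, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy source/target domains: shared structure, plus target outliers.
    source = rng.standard_normal((50, 200))
    target = rng.standard_normal((50, 150)) + 0.5
    target[:, :5] += 20.0 * rng.standard_normal((50, 5))  # heavy outliers
    Us = l1_pca(source - source.mean(axis=1, keepdims=True), 10)
    Ut = l1_pca(target - target.mean(axis=1, keepdims=True), 10)
    print("principal angles (rad):", principal_angles(Us, Ut))
```

Because each component maximises the L1 rather than the L2 projection norm, a few grossly corrupted samples pull the estimated subspace far less than they would in standard PCA, which is the robustness property the paper exploits when placing the source and target subspaces on the Grassmannian.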
