2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)

Abstract

Since standard probabilistic Latent Semantic Analysis (pLSA) handles only discrete quantities, pLSA with Gaussian Mixtures (GM-pLSA) was proposed to extend it to continuous feature spaces by using a Gaussian Mixture Model (GMM) to describe the feature distribution under each aspect. However, inheriting from standard pLSA, GM-pLSA still assumes that terms are independent and thus ignores the intrinsic correlations between them. In this paper, we present graph regularized GM-pLSA (GRGM-pLSA), an extension of GM-pLSA that exploits this neglected term correlation information to improve performance. Drawing on manifold learning theory, this cue is captured by a graph regularizer embedded into the model learning process. In the application of video classification, two kinds of term correlation, representing temporal consistency and visual similarity between sub-shots respectively, are evaluated. Experimental results show that the proposed GRGM-pLSA outperforms GM-pLSA.
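To make the graph-regularizer idea concrete, the sketch below shows the generic graph Laplacian smoothness penalty commonly used in manifold-regularized topic models: tr(PᵀLP), which penalizes aspect distributions that differ across strongly connected terms. This is a minimal illustrative example, not the paper's actual objective; the names `W` (term affinity matrix) and `P` (per-term aspect distributions) are assumptions for illustration.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W from a symmetric affinity matrix W."""
    D = np.diag(W.sum(axis=1))
    return D - W

def graph_regularizer(P, W):
    """Smoothness penalty tr(P^T L P) = 1/2 * sum_ij W_ij * ||P_i - P_j||^2.

    P: (n_terms, n_aspects) array; row i is term i's aspect distribution.
    W: (n_terms, n_terms) symmetric affinity between terms (e.g. temporal
       consistency or visual similarity between sub-shots, as in the paper).
    """
    L = graph_laplacian(W)
    return np.trace(P.T @ L @ P)

# Toy check with two fully connected terms: identical aspect
# distributions incur zero penalty, dissimilar ones are penalized.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P_same = np.array([[0.5, 0.5],
                   [0.5, 0.5]])
P_diff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
print(graph_regularizer(P_same, W))  # 0.0
print(graph_regularizer(P_diff, W))  # 2.0
```

Adding a weighted version of such a term to the GM-pLSA log-likelihood is one standard way a graph regularizer can be "embedded into the process of model learning," as the abstract describes.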
