2015 IIAI 4th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract

Sparse representation of signals has been successfully applied in signal processing. Most existing methods for sparse representation are based on the synthesis model, in which the dictionary is overcomplete. This paper addresses dictionary learning and sparse representation under the so-called analysis model, in which multiplying the signals by the analysis dictionary yields a sparse outcome. Although this model has been studied in the literature, nonnegative dictionary learning for signal representation has received little attention, and it is the focus of this paper. We propose to learn an analysis dictionary from signals using the ℓ1-norm as the sparsity measure and the Euclidean distance as the error measure. Based on this formulation, we present a new algorithm for nonnegative dictionary learning and sparse representation of signals. Numerical experiments on the recovery of analysis dictionaries in both noiseless and noisy settings demonstrate the effectiveness of the proposed method.
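
To illustrate the kind of formulation the abstract describes, the sketch below shows one plausible alternating scheme for nonnegative analysis dictionary learning with an ℓ1 sparsity measure and a Euclidean (Frobenius) error term. It is not the paper's algorithm; the objective, the soft-thresholding/projected-gradient updates, and all names (lam, step, n_iter, learn_nonneg_analysis_dict) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def learn_nonneg_analysis_dict(Y, n_atoms, lam=0.1, step=1e-2, n_iter=200, seed=0):
    """Sketch: learn a nonnegative analysis dictionary Omega so that Omega @ Y is sparse.

    Illustrative objective: 0.5 * ||Z - Omega @ Y||_F^2 + lam * ||Z||_1
    with Omega >= 0 and unit-norm rows to rule out the trivial zero dictionary.
    """
    rng = np.random.default_rng(seed)
    n_features, _ = Y.shape
    Omega = np.abs(rng.standard_normal((n_atoms, n_features)))
    Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)

    for _ in range(n_iter):
        # Sparse analysis representation: exact minimizer over Z is
        # soft-thresholding of the analysis coefficients Omega @ Y.
        Z = soft_threshold(Omega @ Y, lam)
        # Dictionary update: gradient step on the Euclidean error term,
        # then projection onto the nonnegative orthant and row normalization.
        grad = (Omega @ Y - Z) @ Y.T
        Omega = np.maximum(Omega - step * grad, 0.0)
        norms = np.linalg.norm(Omega, axis=1, keepdims=True)
        Omega /= np.maximum(norms, 1e-12)
    return Omega

if __name__ == "__main__":
    # Toy usage on random nonnegative signals (purely illustrative data).
    Y = np.abs(np.random.default_rng(1).standard_normal((20, 500)))
    Omega = learn_nonneg_analysis_dict(Y, n_atoms=30)
    Z = soft_threshold(Omega @ Y, 0.1)
    print("Learned dictionary shape:", Omega.shape)
    print("Fraction of zero analysis coefficients:", np.mean(Z == 0.0))
```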