Abstract
In the traditional approach to learning from examples, classifiers are built in a feature space. Alternatively, decision rules can be constructed on dissimilarity (distance) representations. In such a recognition process, a new object is described by its distances to (a subset of) the training samples. In this paper, a number of methods for tackling this type of classification problem are investigated: feature-based decision rules (which interpret the distance representation as a feature space) and rank-based decision rules (which consider only the given distance relations). The experiments demonstrate that the feature-based classifiers, especially the normal-based ones, often outperform the rank-based ones. This is to be expected, since summation-based distances are, under general conditions, approximately normally distributed. In addition, the support vector classifier also achieves high accuracy.
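The core idea of a dissimilarity representation can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the synthetic two-class Gaussian data, the Euclidean distance, the prototype selection by subsampling, and the nearest-mean rule (a simple normal-based classifier applied to the distance features) are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "objects": two Gaussian classes in a 5-D feature space.
X0 = rng.normal(0.0, 1.0, size=(50, 5))
X1 = rng.normal(1.5, 1.0, size=(50, 5))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Dissimilarity representation: each object is described by its Euclidean
# distances to a small set of prototypes (a subset of the training samples).
prototypes = X[::10]  # 10 prototypes, subsampled for illustration
D = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)  # (100, 10)

# Feature-based decision rule: interpret the distance vectors as ordinary
# features and apply a nearest-mean classifier on them.
mean0 = D[y == 0].mean(axis=0)
mean1 = D[y == 1].mean(axis=0)

def classify(d):
    # Assign the class whose mean distance vector is closer.
    return int(np.linalg.norm(d - mean1) < np.linalg.norm(d - mean0))

preds = np.array([classify(d) for d in D])
accuracy = (preds == y).mean()
print(f"training accuracy on distance features: {accuracy:.2f}")
```

Any classifier that operates on vectors (e.g. a normal-based or support vector classifier) can be applied to the rows of `D` in the same way; a rank-based rule would instead use only the ordering of the distances in each row.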