2015 IEEE Eighth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)

Abstract

A K-Winner Machine (KWM) selects, among a family of classifiers, the specific configuration that minimizes the expected generalization error. In training, KWM uses unsupervised Vector Quantization and subsequent calibration to label data-space partitions. At run time, KWM seeks the largest set of best-matching prototypes agreeing on a test sample, and provides a local-level measure of confidence. The VC-dim of a KWM classifier is worked out exactly; the resulting small values set tight bounds on generalization performance. The method applies to high-dimensional, multi-class problems with large data sets. Experimental results on both a synthetic and a real domain (NIST handwritten numerals) confirm the consistency of the theoretical framework.
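The run-time rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes prototypes have already been produced by vector quantization and labeled during calibration, and it uses Euclidean distance to rank prototypes. The function name `kwm_classify` and the input shapes are illustrative assumptions.

```python
import numpy as np

def kwm_classify(x, prototypes, labels):
    """Sketch of the KWM run-time rule: find the largest set of
    best-matching prototypes that agree on the class of a test sample.

    x          : (d,) test sample
    prototypes : (n, d) calibrated prototype positions (assumed given)
    labels     : (n,) class label assigned to each prototype

    Returns (predicted_label, k), where k -- the number of agreeing
    nearest prototypes -- serves as a local confidence measure.
    """
    # Rank prototypes by distance to the test sample, best match first.
    order = np.argsort(np.linalg.norm(prototypes - x, axis=1))
    ranked = labels[order]
    # Grow K while the K nearest prototypes all share one label.
    k = 1
    while k < len(ranked) and ranked[k] == ranked[0]:
        k += 1
    return ranked[0], k
```

A larger returned `k` means more of the closest prototypes agree on the decision, which is the local-level confidence the abstract refers to.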