Fourth IEEE International Conference on Data Mining (ICDM'04)

Abstract

In the past, machine learning algorithms have been successfully used in many problems and are emerging as valuable data analysis tools. However, their serious practical use is hindered by the fact that, more often than not, they cannot produce reliable and unbiased assessments of the quality of their predictions. In recent years, several approaches for estimating the reliability or confidence of individual classifiers have emerged, many of them building upon the algorithmic theory of randomness: (in historical order) transduction-based confidence estimation, typicalness-based confidence estimation, and transductive reliability estimation. Unfortunately, each has weaknesses: either it is tightly bound to a particular learning algorithm, or the interpretation of its reliability estimates is not always consistent with statistical confidence levels. In this paper we propose a joint approach that compensates for these weaknesses by integrating typicalness-based confidence estimation and transductive reliability estimation into a joint confidence machine. The resulting confidence machine produces confidence values in the statistical sense (e.g., a confidence level of 95% means that in 95% of cases the predicted class is also the true class) and provides a general principle that is independent of the particular underlying classifier. We perform a series of tests with several different machine learning algorithms in several problem domains. We compare our results with those of a proprietary TCM-NN method as well as with kernel density estimation. We show that the proposed method significantly outperforms density estimation methods, and we show how it may be used to improve their performance.
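To make the abstract's notion of typicalness-based confidence concrete, the following is a minimal sketch of conformal prediction with the 1-NN nonconformity measure commonly used by TCM-NN: an example's nonconformity score is its distance to the nearest neighbour of the same class divided by its distance to the nearest neighbour of a different class, and the p-value of a candidate label is the fraction of examples at least as nonconforming as the test example. The data, function names, and nonconformity measure here are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of typicalness-based (conformal) confidence estimation.
# Nonconformity (TCM-NN style, an assumed choice here):
#   alpha_i = min dist to same-class neighbour / min dist to other-class neighbour
import math

def nonconformity(points, labels, i):
    """1-NN nonconformity score of example i within (points, labels)."""
    same = [math.dist(points[i], points[j]) for j in range(len(points))
            if j != i and labels[j] == labels[i]]
    other = [math.dist(points[i], points[j]) for j in range(len(points))
            if labels[j] != labels[i]]
    return min(same) / min(other) if same and other else float("inf")

def p_values(train_x, train_y, x, label_set):
    """Transductively compute a p-value for each candidate label of x."""
    pv = {}
    for y in sorted(label_set):
        xs, ys = train_x + [x], train_y + [y]          # tentatively label x as y
        alphas = [nonconformity(xs, ys, i) for i in range(len(xs))]
        # p-value: fraction of examples at least as nonconforming as x
        pv[y] = sum(a >= alphas[-1] for a in alphas) / len(alphas)
    return pv

# Toy two-class data (illustrative only)
train_x = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_y = [0, 0, 1, 1]
pv = p_values(train_x, train_y, (0.05, 0.1), {0, 1})
pred = max(pv, key=pv.get)                 # label with the largest p-value
confidence = 1 - sorted(pv.values())[-2]   # 1 minus the second-largest p-value
print(pred, confidence)
```

Under this scheme the confidence has the calibration property the abstract refers to: at confidence level 95%, at most 5% of predictions are wrong in the long run, regardless of the underlying nonconformity measure.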
