Choosing a discernibility measure for the reject option of individual and multiple classifiers
A novel method for evaluating classification reliability is proposed, based on how discernible a pattern's class is from the other classes at the pattern's location. Three measures of discernibility are experimentally compared with conventional techniques based on classification scores for class labels. Classification accuracy can be drastically improved by using discernibility measures to retain only the most reliable ("elite") patterns, and it can be boosted further by forming an amalgamation of the elites of different classifiers. The improved performance comes at the price of rejecting many patterns. This price is worth paying in situations where unreliable accuracy rates would otherwise force manual testing of very complex technical devices or manual diagnosis of human diseases. Unlike conventional techniques for estimating reliability, the proposed measures are applicable to small datasets as well as to datasets with complex class structures on which conventional classifiers show low accuracy rates.
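The abstract's reject-option idea can be illustrated with a minimal sketch. The paper's actual discernibility measures are not specified here, so the code below uses a hypothetical stand-in: the fraction of a test pattern's k nearest training neighbors that share the predicted label, measured at the pattern's location. Patterns whose measure falls below a threshold are rejected; accuracy is then reported both on all patterns and on the accepted "elite" subset. The toy data, the nearest-centroid base classifier, and the threshold value are all assumptions for illustration only.

```python
import math
import random

random.seed(0)

# Toy 2-D data: two overlapping clusters (hypothetical stand-in for a real dataset).
def make_cluster(cx, cy, label, n):
    return [((cx + random.gauss(0, 1.0), cy + random.gauss(0, 1.0)), label)
            for _ in range(n)]

train = make_cluster(0.0, 0.0, 0, 60) + make_cluster(2.5, 2.5, 1, 60)
test = make_cluster(0.0, 0.0, 0, 60) + make_cluster(2.5, 2.5, 1, 60)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Nearest-centroid base classifier (stand-in for any conventional classifier).
def centroid(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

cents = {lbl: centroid([x for x, l in train if l == lbl]) for lbl in (0, 1)}

def predict(x):
    return min(cents, key=lambda lbl: dist(x, cents[lbl]))

# Discernibility proxy (assumption, not the paper's measure): fraction of the
# k nearest training patterns that carry the predicted label.
def discernibility(x, pred, k=7):
    neigh = sorted(train, key=lambda p: dist(x, p[0]))[:k]
    return sum(1 for _, l in neigh if l == pred) / k

THRESHOLD = 0.9  # reject any pattern whose measure falls below this

accepted = correct_accepted = correct_all = 0
for x, true in test:
    pred = predict(x)
    correct_all += (pred == true)
    if discernibility(x, pred) >= THRESHOLD:
        accepted += 1
        correct_accepted += (pred == true)

acc_all = correct_all / len(test)
acc_elite = correct_accepted / accepted if accepted else 0.0
print(f"overall accuracy: {acc_all:.2f}")
print(f"elite accuracy:   {acc_elite:.2f} on {accepted}/{len(test)} accepted")
```

On data like this the elite subset typically scores at least as well as the full test set, at the cost of leaving the rejected patterns unclassified, which mirrors the trade-off described in the abstract. Amalgamating the elites of several base classifiers would amount to taking the union of their accepted sets.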