On Evaluating Interestingness Measures for Closed Itemsets
Numerous measures have been proposed for selecting interesting itemsets, but it remains unclear which of them performs best. In this paper we introduce a methodology for evaluating interestingness measures that relies on supervised classification, allowing us to avoid both domain experts and artificial datasets in the evaluation process. We apply our methodology to promising measures for itemset selection, such as leverage and stability. We show that although neither measure is a clear winner, stability performs slightly better.
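The two measures named above can be illustrated concretely. The following Python sketch (the toy dataset and all function names are assumptions for illustration, not taken from the paper) computes Webb's leverage of an itemset, i.e. its support minus the best independence baseline over two-way partitions, and Kuznetsov's stability of a closed itemset, i.e. the fraction of subsets of its extent whose intent is exactly that itemset, both by brute-force enumeration:

```python
from itertools import chain, combinations

# Toy transactional dataset (an assumption for illustration).
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
]

def support(itemset, db):
    """Relative support: fraction of transactions containing the itemset."""
    return sum(1 for t in db if itemset <= t) / len(db)

def leverage(itemset, db):
    """Webb's leverage: support minus the maximum product of supports
    over all two-way partitions of the itemset (independence baseline)."""
    items = sorted(itemset)
    best = 0.0
    # Enumerate nonempty proper subsets A; B is the complement of A.
    for r in range(1, len(items)):
        for sub in combinations(items, r):
            a, b = set(sub), itemset - set(sub)
            best = max(best, support(a, db) * support(b, db))
    return support(itemset, db) - best

def extent(itemset, db):
    """Indices of transactions containing the itemset."""
    return [i for i, t in enumerate(db) if itemset <= t]

def intent(indices, db):
    """Items shared by all transactions with the given indices
    (all items, by convention, for the empty index set)."""
    if not indices:
        return set.union(*map(set, db))
    return set.intersection(*(db[i] for i in indices))

def stability(itemset, db):
    """Kuznetsov's stability of a closed itemset: the fraction of
    subsets of its extent whose intent equals the itemset itself."""
    ext = extent(itemset, db)
    subsets = chain.from_iterable(
        combinations(ext, r) for r in range(len(ext) + 1)
    )
    hits = sum(1 for s in subsets if intent(list(s), db) == itemset)
    return hits / 2 ** len(ext)
```

Both functions are exponential in the itemset or extent size, so this sketch is suitable only for small examples; the paper's evaluation presumably uses more efficient machinery.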