Ceteris Paribus Preferences: Prediction via Abduction
We present an approach to preference learning based on ceteris paribus preferences (i.e., preferences that hold "other things being equal") over attribute subsets. We provide a semantics for such preferences based on formal concept analysis and show that the ceteris paribus preferences valid in a dataset correspond to the implications valid in a certain formal context built from this dataset. Preferences computed from a training dataset can then be used to extend the preference relation to new objects based on the attributes they have, but this approach may require exponential time. However, to compute the preferences over new objects induced by the preference theory behind the training dataset, it is not necessary to compute this theory explicitly: an abduction algorithm for Horn formulae represented by their characteristic models can be modified into an algorithm that induces the preference between a pair of new objects in time polynomial in the size of the training dataset.
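To make the notion of validity concrete, the following is a minimal sketch of the naive (worst-case exponential, pairwise) check that a single ceteris paribus preference holds in a dataset. The data, the function name, and the simplified semantics used here (B is preferred to A when every pair of objects agreeing on all attributes outside A and B, one containing B and the other containing A, is ordered accordingly) are illustrative assumptions, not the paper's exact definitions; the paper's contribution is precisely to avoid this brute-force route via abduction over characteristic models.

```python
from itertools import product

# Hypothetical toy dataset: objects with their attribute sets, plus a
# training preference relation of pairs (g, h) read as "g is at least
# as good as h". All names here are illustrative assumptions.
attrs = {
    "o1": {"a", "c"},
    "o2": {"b", "c"},
    "o3": {"a"},
    "o4": {"b"},
}
# Training preferences: objects with b are preferred to objects with a,
# other things being equal.
prefs = {("o2", "o1"), ("o4", "o3")}


def holds_ceteris_paribus(A, B, attrs, prefs):
    """Check whether 'B is preferred to A, other things being equal'
    is valid in the dataset under the simplified semantics: for every
    pair of distinct objects g, h that agree on all attributes outside
    A | B, with B contained in g's attributes and A in h's, the pair
    (g, h) must belong to the training preference relation."""
    rest = lambda s: s - (A | B)  # attributes held "equal"
    for g, h in product(attrs, attrs):
        if g == h:
            continue
        if B <= attrs[g] and A <= attrs[h] and rest(attrs[g]) == rest(attrs[h]):
            if (g, h) not in prefs:
                return False  # a witnessing pair violates the preference
    return True
```

For the toy data above, `holds_ceteris_paribus({"a"}, {"b"}, attrs, prefs)` succeeds, while the reversed preference `holds_ceteris_paribus({"b"}, {"a"}, attrs, prefs)` fails, since `("o1", "o2")` is not in the training relation. Enumerating all candidate attribute subsets this way is what blows up exponentially, motivating the abduction-based algorithm.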