In this paper we study a new framework, introduced by Vapnik (1998), that offers an alternative capacity concept to the large-margin approach. In the particular case of binary classification, we are given a set of labeled examples together with a collection of "non-examples" that belong to neither class of interest. This collection, called the Universum, allows one to encode prior knowledge by representing meaningful concepts in the same domain as the problem at hand. We describe an algorithm that leverages the Universum by maximizing the number of observed contradictions, and show experimentally that this approach yields accuracy improvements over learning from labeled data alone.
Inference with the Universum.
Jason Weston, Ronan Collobert, Fabian Sinz, Leon Bottou and Vladimir Vapnik.
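The contradiction-maximization idea described in the abstract is often realized as an SVM objective with an extra penalty that pulls Universum points toward the decision boundary. Below is a minimal toy sketch of that idea using subgradient descent; it is an illustration under assumed hyperparameters (`C`, `Cu`, `eps`, learning rate), not the authors' implementation or the exact algorithm from the paper:

```python
import numpy as np

def train_usvm(X, y, U, C=1.0, Cu=0.5, eps=0.1, lr=0.01, epochs=500, seed=0):
    """Subgradient descent on a Universum-SVM-style objective (a sketch):
       0.5*||w||^2 + C*sum_i max(0, 1 - y_i f(x_i))
                   + Cu*sum_j max(0, |f(u_j)| - eps),
    where f(x) = w.x + b. Labeled points get the usual hinge loss;
    Universum points are penalized only when they lie farther than eps
    from the decision boundary, pushing f(u) toward zero."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    b = 0.0
    for _ in range(epochs):
        gw = w.copy()          # subgradient of the regularizer
        gb = 0.0
        # hinge loss on labeled examples
        margins = y * (X @ w + b)
        viol = margins < 1
        gw -= C * (y[viol, None] * X[viol]).sum(axis=0)
        gb -= C * y[viol].sum()
        # eps-insensitive penalty on Universum examples
        fu = U @ w + b
        far = np.abs(fu) > eps
        s = np.sign(fu[far])
        gw += Cu * (s[:, None] * U[far]).sum(axis=0)
        gb += Cu * s.sum()
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy data: two separable clusters, plus (hypothetical) Universum
# points lying between the classes.
X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -2.0], [-1.5, -2.5]])
y = np.array([1, 1, -1, -1])
U = np.array([[0.1, -0.1], [-0.2, 0.2], [0.0, 0.0]])
w, b = train_usvm(X, y, U)
print((np.sign(X @ w + b) == y).mean())  # training accuracy on the toy problem
```

In this view, a "contradiction" corresponds to a Universum point left near the boundary, where equivalent classifiers disagree on its label; the `eps`-insensitive term rewards hypotheses that keep many such points unlabeled with confidence.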
*ERRATUM* Wuyang Dai and Vladimir Cherkassky have drawn our attention to their inability to reproduce the WinMac experiment in the ICML paper. Their results generally agree with ours on the digits data set but disagree on the WinMac data set; the parameter tuning on that data set may have been flawed.
ABCDETC dataset used in the experiments