Use of the zero-norm with linear models and kernel methods

Jason Weston, Andre Elisseeff, Bernhard Schoelkopf and Mike Tipping

Abstract

We explore the use of the so-called zero-norm of the parameters of linear models in learning. Minimizing this quantity has many uses in a machine learning context: variable or feature selection, minimizing training error, and ensuring sparsity of solutions. We derive a simple but practical method for achieving these goals and discuss its relationship to existing techniques for minimizing the zero-norm. The method boils down to a simple modification of the vanilla SVM, namely an iterative multiplicative rescaling of the training data. The applications we investigate, which also serve to illustrate the discussion, include variable and feature selection on biological microarray data, multicategory classification, finding sparse kernel expansions, and vector quantization of images.
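To give a concrete picture of the "iterative multiplicative rescaling" mentioned in the abstract, here is a minimal sketch for binary-classification feature selection. It assumes scikit-learn's LinearSVC as the vanilla linear SVM; the function name, the scaling vector z, the use of |w| as the rescaling factor, and the iteration count are illustrative assumptions, not details taken from the paper.

```python
# Sketch: approximate zero-norm minimization by iteratively rescaling the
# input variables with the weights of a linear SVM trained on the rescaled data.
# Assumptions: scikit-learn's LinearSVC; names and parameters are illustrative.
import numpy as np
from sklearn.svm import LinearSVC

def zero_norm_rescaling(X, y, n_iterations=10, C=1.0):
    """Return a per-feature scaling vector; near-zero entries mark pruned variables."""
    z = np.ones(X.shape[1])                  # multiplicative scaling of the inputs
    for _ in range(n_iterations):
        svm = LinearSVC(C=C, max_iter=10000)
        svm.fit(X * z, y)                    # train a linear SVM on rescaled data
        z = z * np.abs(svm.coef_.ravel())    # rescale by the learned weight magnitudes
    return z

# Illustrative usage on synthetic data: only the first two variables are informative.
if __name__ == "__main__":
    rng = np.random.RandomState(0)
    X = rng.randn(100, 20)
    y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.randn(100))
    z = zero_norm_rescaling(X, y)
    print("surviving variables:", np.nonzero(z > 1e-6)[0])
```

Variables whose scaling factor is driven toward zero are effectively removed from the model, which is how the iteration encourages sparsity in the weight vector.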
Download article (This is a longer version than the one accepted for JMLR; it includes technical appendices and experiments on vector quantization and sparse kernel expansions that are missing from the JMLR version.)

Data used in the experiments: Home