
The winnow algorithm[1] is a technique from machine learning for learning a linear classifier from labeled examples. It is very similar to the perceptron algorithm. However, the perceptron algorithm uses an additive weight-update scheme, while Winnow uses a multiplicative scheme that allows it to perform much better when many dimensions are irrelevant (hence its name winnow). It is a simple algorithm that scales well to high-dimensional data. During training, Winnow is shown a sequence of positive and negative examples. From these it learns a decision hyperplane that can then be used to label novel examples as positive or negative. The algorithm can also be used in the online learning setting, where the learning and the classification phase are not clearly separated.

Algorithm

The basic algorithm, Winnow1, is as follows. The instance space is X = {0, 1}^n, that is, each instance is described as a set of Boolean-valued features. The algorithm maintains non-negative weights w_i for i ∈ {1, …, n}, which are initially set to 1, one weight for each feature. When the learner is given an example (x_1, …, x_n), it applies the typical prediction rule for linear classifiers:

  • If Σ_{i=1}^n w_i x_i > Θ, then predict 1
  • Otherwise predict 0

Here Θ is a real number called the threshold. Together with the weights, the threshold defines a dividing hyperplane in the instance space. Good bounds are obtained if Θ = n/2 (see below).

For each example with which it is presented, the learner applies the following update rule:

  • If an example is correctly classified, do nothing.
  • If an example is predicted incorrectly and the correct result was 0, then for each feature with x_i = 1, the corresponding weight w_i is set to 0 (demotion step).
  • If an example is predicted incorrectly and the correct result was 1, then for each feature with x_i = 1, the corresponding weight w_i is multiplied by α (promotion step).

A typical value for α is 2.
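The training procedure described above can be sketched in Python (a minimal illustration; the function name and the toy examples are invented for this sketch, using α = 2 and the recommended Θ = n/2):

```python
def winnow1_train(examples, alpha=2.0):
    """Train Winnow1 on (x, y) pairs, where x is a tuple of 0/1 features."""
    n = len(examples[0][0])
    theta = n / 2                     # threshold Θ = n/2, as recommended above
    w = [1.0] * n                     # every weight starts at 1
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
        if pred == y:
            continue                  # correctly classified: do nothing
        for i, xi in enumerate(x):
            if xi == 1:               # only active features are updated
                w[i] = w[i] * alpha if y == 1 else 0.0  # promotion / demotion
    return w, theta

# Toy stream: the target concept is the disjunction x_1 OR x_2
# (the first two of four features).
examples = [
    ((1, 0, 0, 0), 1), ((0, 0, 1, 1), 0), ((1, 0, 1, 0), 1),
    ((0, 1, 0, 0), 1), ((0, 0, 1, 0), 0), ((0, 1, 1, 1), 1),
    ((1, 0, 0, 0), 1), ((0, 1, 0, 0), 1), ((0, 0, 1, 1), 0),
]
w, theta = winnow1_train(examples)
```

On this toy stream the two relevant weights are promoted to 4 while the irrelevant ones stay at or below 1, so the final hyperplane Σ w_i x_i > Θ agrees with x_1 ∨ x_2 on every input.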

There are many variations of this basic approach. Winnow2[1] is similar, except that in the demotion step the weights are divided by α instead of being set to 0. Balanced Winnow maintains two sets of weights, and thus two hyperplanes. This can then be generalized for multi-label classification.
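The single change Winnow2 makes to the update rule can be sketched as follows (a hedged illustration; the function name is invented here):

```python
def winnow2_update(w, x, y, pred, alpha=2.0):
    """Apply one Winnow2 update in place: x is a 0/1 feature tuple,
    y the true label, pred the label the learner predicted."""
    if pred != y:
        for i, xi in enumerate(x):
            if xi == 1:                          # only active features change
                # Promotion is as in Winnow1; demotion divides instead of zeroing.
                w[i] = w[i] * alpha if y == 1 else w[i] / alpha
    return w

w = [1.0, 1.0, 1.0]
winnow2_update(w, (1, 1, 0), y=0, pred=1)        # false positive: demote
# w is now [0.5, 0.5, 1.0] rather than [0.0, 0.0, 1.0] as in Winnow1
```

Because demoted weights remain positive, a feature that is punished early can still recover through later promotions, which Winnow1's zeroing forbids.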

Mistake bounds

In certain circumstances, it can be shown that the number of mistakes Winnow makes as it learns has an upper bound that is independent of the number of instances with which it is presented. If the Winnow1 algorithm uses α = 2 and Θ = n/2 on a target function that is a k-literal monotone disjunction given by f(x_1, …, x_n) = x_{i_1} ∨ ⋯ ∨ x_{i_k}, then for any sequence of instances the total number of mistakes is bounded by 2 + 3k(1 + log_2 n).[2]
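The bound can be illustrated empirically (a sketch, not a proof; the helper name and the toy parameters n = 64, k = 3 are invented for this example — note the bound holds for any example sequence, so the assertion below cannot fail however the stream is drawn):

```python
import math
import random

def winnow1_mistakes(stream, n, alpha=2.0):
    """Run Winnow1 over a stream of (x, y) pairs and count its mistakes."""
    theta = n / 2
    w = [1.0] * n
    mistakes = 0
    for x, y in stream:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
        if pred != y:
            mistakes += 1
            for i, xi in enumerate(x):
                if xi == 1:
                    w[i] = w[i] * alpha if y == 1 else 0.0
    return mistakes

random.seed(0)
n, k = 64, 3                                # 64 features, 3 of them relevant
stream = []
for _ in range(5000):
    x = tuple(random.randint(0, 1) for _ in range(n))
    y = 1 if any(x[i] for i in range(k)) else 0   # target: x_1 OR x_2 OR x_3
    stream.append((x, y))

bound = 2 + 3 * k * (1 + math.log2(n))      # the bound from the text: 65 here
mistakes = winnow1_mistakes(stream, n)      # stays at or below the bound
```

The mistake count does not grow with the stream length: doubling the 5000 examples leaves it unchanged once the relevant weights have been promoted past the threshold.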

References

  1. ^ a b Nick Littlestone (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm". Machine Learning. 2: 285–318.
  2. ^ Nick Littlestone (1989). "Mistake bounds and logarithmic linear-threshold learning algorithms". Technical report UCSC-CRL-89-11, University of California, Santa Cruz.
