

In the area of competitive learning a rather large number of models exist; they have similar goals but differ considerably in the way they work. A common goal of these algorithms is to distribute a certain number of vectors in a possibly high-dimensional space. The distribution of these vectors should reflect (in one of several possible ways) the probability distribution of the input signals, which in general is not given explicitly but only through sample vectors.

In this report we review several methods related to competitive learning. A common terminology is used to make a comparison of the methods easy. Moreover, software implementations of the methods are provided, allowing experiments with different data distributions and observation of the learning process. Thanks to the Java programming language, the implementations run on a large number of platforms without the need for compilation or local adaptation.

The report is structured as follows: in chapter 2 the basic terminology is introduced and properties shared by all models are outlined. Chapter 3 discusses possible goals for competitive learning systems. Chapter 4 is concerned with hard competitive learning, i.e. models where only the winner for the given input signal is adapted. Chapters 5 and 6 describe soft competitive learning; these models adapt not only the winner but also some other units of the network. Chapter 5 is concerned with models whose network has no fixed dimensionality. Chapter 6 describes models which do have a fixed dimensionality and may be used for data visualization, since they define a mapping from the usually high-dimensional input space to the low-dimensional network structure. The last two chapters still have to be written and will contain quantitative results and a discussion.
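The distinction between hard and soft competitive learning can be illustrated by a single winner-take-all adaptation step. The following Java fragment is a minimal sketch only; the class and method names, the squared Euclidean distance measure, and the constant learning rate are assumptions made for illustration and are not taken from the report's software.

```java
// Minimal sketch of one hard competitive learning (winner-take-all)
// adaptation step. Assumptions for illustration: squared Euclidean
// distance as the distance measure and a constant learning rate;
// class and method names are hypothetical.
public class HardCompetitiveLearning {

    // Index of the unit whose reference vector is closest to the signal.
    static int findWinner(double[][] units, double[] signal) {
        int winner = 0;
        double best = Double.MAX_VALUE;
        for (int i = 0; i < units.length; i++) {
            double d = 0.0;
            for (int k = 0; k < signal.length; k++) {
                double diff = signal[k] - units[i][k];
                d += diff * diff;
            }
            if (d < best) {
                best = d;
                winner = i;
            }
        }
        return winner;
    }

    // One step: adapt only the winner toward the input signal.
    static int adaptStep(double[][] units, double[] signal, double epsilon) {
        int winner = findWinner(units, signal);
        for (int k = 0; k < signal.length; k++) {
            units[winner][k] += epsilon * (signal[k] - units[winner][k]);
        }
        return winner;
    }

    public static void main(String[] args) {
        // Three reference vectors in the plane and one sample input signal.
        double[][] units = { {0.0, 0.0}, {1.0, 1.0}, {0.0, 1.0} };
        double[] signal = { 0.9, 0.8 };
        int w = adaptStep(units, signal, 0.5);
        System.out.println("winner: " + w);
    }
}
```

In the soft competitive learning models of chapters 5 and 6 the adaptation step would also move some non-winning units toward the signal, typically by a smaller amount.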

Bernd Fritzke
Sat Apr 5 18:17:58 MET DST 1997