Hard competitive learning (a.k.a. winner-take-all learning) comprises methods where each input signal determines the adaptation of only one unit, the winner. Different specific methods can be obtained by performing either batch or on-line update. In batch methods (e.g. LBG) all possible input signals (which must come from a finite set in this case) are evaluated before any adaptations are done. This is iterated a number of times. On-line methods, on the other hand (e.g. k-means), perform an update directly after each input signal. Among the on-line methods, variants with a constant adaptation rate can be distinguished from variants whose adaptation rate decreases in one way or another.
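The on-line variant can be sketched in a few lines of NumPy. The function name and the exponential decay schedule below are illustrative choices, not prescribed by the text; only the core winner-take-all step (find the nearest unit, move only that unit toward the signal) is the method itself:

```python
import numpy as np

def online_hard_cl(data, n_units, n_steps, eps_i=0.5, eps_f=0.005, seed=0):
    """On-line hard competitive learning: only the winner is adapted.

    The adaptation rate decays from eps_i to eps_f (one common schedule
    among several; a constant rate is also possible).
    """
    rng = np.random.default_rng(seed)
    # initialize reference vectors from distinct sample vectors
    w = data[rng.choice(len(data), size=n_units, replace=False)].astype(float)
    for t in range(n_steps):
        xi = data[rng.integers(len(data))]                   # draw an input signal
        winner = np.argmin(np.linalg.norm(w - xi, axis=1))   # nearest reference vector
        eps = eps_i * (eps_f / eps_i) ** (t / n_steps)       # decreasing adaptation rate
        w[winner] += eps * (xi - w[winner])                  # adapt the winner only
    return w
```

Run on a two-cluster data set, the two reference vectors typically end up near the cluster centers; with a constant rate they would keep fluctuating instead of converging.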
A general problem occurring with hard competitive learning is the possible existence of ``dead units''. These are units which - perhaps due to inappropriate initialization - are never the winner for any input signal and, therefore, keep their position indefinitely. Such units do not contribute to whatever the network's purpose is (e.g. error minimization) and must be considered harmful, since they are unused network resources. A common way to avoid dead units is to use distinct sample vectors, drawn according to the input distribution, to initialize the reference vectors.
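Dead units are easy to detect empirically: count, over a data set, how often each reference vector wins. A minimal sketch (the helper name is mine, not from the text):

```python
import numpy as np

def win_counts(data, w):
    """Count how often each reference vector is the winner over a data set.

    A unit with count zero is a 'dead unit': it is never nearest to any
    input signal and would keep its position indefinitely.
    """
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)  # pairwise distances
    winners = d.argmin(axis=1)                                    # winner per signal
    return np.bincount(winners, minlength=len(w))
```

Initializing from distinct sample vectors avoids the pathological case of a unit placed far outside the data, which would register a count of zero here.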
The following problem, however, remains: if the reference vectors are initialized randomly according to the input probability density, then their expected initial local density is proportional to that density. This may be rather suboptimal for certain goals. For example, if the goal is error minimization and the density is highly non-uniform, then it is better to undersample the regions of high probability density (i.e., use fewer reference vectors there than dictated by the density) and oversample the other regions. One possibility to adapt the distribution of the reference vectors to a specific goal is the use of local statistical measures for directing insertions and possibly also deletions of units (see sections 5.4, 6.2 and 6.3).
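As a hedged sketch of such an insertion rule: if each unit accumulates a local statistic (e.g. the summed squared distance to the signals it has won), a new unit can be inserted near the unit with the largest accumulated value. The function and its parameters are hypothetical illustrations; the specific criteria used by the methods in sections 5.4, 6.2 and 6.3 differ in detail:

```python
import numpy as np

def insert_by_error(w, local_error, offset=0.01, rng=None):
    """Insert a new reference vector near the unit with the largest
    accumulated local error (one of many possible insertion criteria)."""
    if rng is None:
        rng = np.random.default_rng()
    q = int(np.argmax(local_error))                         # worst unit
    new = w[q] + offset * rng.standard_normal(w.shape[1])   # small jitter near it
    return np.vstack([w, new])
```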
Another problem of hard competitive learning is that different random initializations may lead to very different results. The purely local adaptations may be unable to move the system out of the poor local minimum in which it was started. One way to cope with this problem is to change the ``winner-take-all'' approach of hard competitive learning to the ``winner-take-most'' approach of soft competitive learning. In this case not only the winner but also some other units are adapted (see chapters 5 and 6). In general this decreases the dependence on initialization.
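A single ``winner-take-most'' step can be illustrated with a rank-based weighting in the style of the neural gas method (one soft competitive scheme among those discussed later; the parameter names here are illustrative):

```python
import numpy as np

def soft_cl_step(w, xi, eps=0.1, lam=1.0):
    """One 'winner-take-most' step: every unit is adapted, weighted by a
    factor that decays with its distance rank (rank 0 = winner)."""
    d = np.linalg.norm(w - xi, axis=1)
    ranks = np.argsort(np.argsort(d))   # 0 for the winner, 1 for second-nearest, ...
    h = np.exp(-ranks / lam)            # rank-based neighborhood function
    return w + eps * h[:, None] * (xi - w)
```

Because every unit is pulled toward each signal (with strength decreasing by rank), badly initialized units are dragged into the data region instead of remaining dead, which is what reduces the dependence on initialization.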