NN03: Self-organising maps
In this post, I take a look at self-organising maps.
In this notebook:
- Self-organising maps
To introduce the self-organising map (SOM) architecture, we first need a few new concepts.
Competitive learning
In competitive learning, the neurons compete with each other to determine a winner; after the competition, only one neuron has a positive activation.
Mathematically:
- Let m denote the dimension of the input space.
- Let x = [x1, x2, …, xm] be an input vector.
- Let Wj = [Wj1, Wj2, …, Wjm] be the weight vector of neuron j (j = 1, 2, …, n), where n is the total number of neurons in the network.
2D feature map: <img src="map.png" width="350" height="350">
Winning criterion
The neuron whose weight vector best matches the current input vector wins:
- compare the inner products Wj·x for j = 1, 2, …, n and select the largest.
- alternatively, minimise the Euclidean distance between Wj and x: <img src="euc.png" width="150" height="150"> …where i(x) identifies the neuron i that best matches input x (both criteria are sketched below).
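As a minimal sketch of the competition step (the network size, input dimension, and random data here are assumptions made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 3              # assumed network size and input dimension
W = rng.random((n, m))     # one weight vector Wj per row
x = rng.random(m)          # a single input vector

# criterion 1: largest inner product Wj . x
winner = np.argmax(W @ x)

# criterion 2: smallest Euclidean distance ||x - Wj||
# (the two criteria agree when the weight vectors have unit length)
i_x = np.argmin(np.linalg.norm(W - x, axis=1))
```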
Co-operation
The winning neuron locates the centre of a topological neighbourhood of co-operating neurons.
A firing neuron tends to excite the neurons in the immediate neighbourhood more than those far away.
Let hji denote a neighbourhood centred on winning neuron i and containing a set of co-operating neurons, one of which is j.
Let dji denote the distance between winning neuron i and its neighbour j.
- we want hji to be symmetric with respect to dji.
- we want hji to decrease monotonically with increasing dji.
- a typical choice is the Gaussian function (sketched below): <img src="coop.png" width="300" height="300">
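Assuming the usual Gaussian form hji = exp(−dji² / 2σ²), a minimal sketch of this neighbourhood function (the function name and the width parameter sigma are my own choices):

```python
import numpy as np

def gaussian_neighbourhood(d_ji, sigma):
    """h_ji: symmetric in d_ji, maximal at the winner (d_ji = 0),
    and monotonically decreasing as d_ji grows."""
    return np.exp(-d_ji ** 2 / (2 * sigma ** 2))

# the discount shrinks smoothly with lattice distance from the winner
print(gaussian_neighbourhood(np.array([0.0, 1.0, 2.0]), sigma=1.0))
# -> [1.0, 0.6065, 0.1353] (approximately)
```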
A SOM is biologically plausible: competition implements lateral inhibition, and co-operation implements lateral interaction.
Synaptic adaptation
For the network to be self-organising, the weight vectors are required to change according to the input vector. A variation of Hebb's rule is used: <img src="hebbb.png" width="150" height="150">
This has the effect of moving the weight vector Wi of the winning neuron i towards the input vector x, and the weight vectors Wj of neighbouring neurons towards x, but by a smaller amount (discounted by hji).
Upon repeated presentations of training data, the weight vectors follow the distribution of the input vectors.
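Putting competition, co-operation, and adaptation together, here is a minimal training-loop sketch; the grid size, epoch count, and decay schedules are assumptions chosen purely for illustration:

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), epochs=20, eta0=0.5, sigma0=3.0):
    """Minimal SOM training loop (hyperparameters are illustrative only)."""
    rng = np.random.default_rng(0)
    rows, cols = grid_shape
    # lattice coordinates of each neuron, used for neighbourhood distances
    coords = np.array([(r, c) for r in range(rows)
                       for c in range(cols)], dtype=float)
    W = rng.random((rows * cols, data.shape[1]))  # one weight vector per neuron

    for epoch in range(epochs):
        # shrink the learning rate and neighbourhood width over time
        eta = eta0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in rng.permutation(data):
            # competition: winner i(x) minimises the Euclidean distance
            i = np.argmin(np.linalg.norm(W - x, axis=1))
            # co-operation: Gaussian neighbourhood centred on the winner
            d = np.linalg.norm(coords - coords[i], axis=1)
            h = np.exp(-d ** 2 / (2 * sigma ** 2))
            # adaptation: move every Wj towards x, discounted by h_ji
            W += eta * h[:, None] * (x - W)
    return W, coords

# repeated presentations: the weights spread to follow the input distribution
data = np.random.default_rng(1).random((500, 3))
W, coords = train_som(data)
```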
Properties of self-organising maps
- Dimensionality reduction: self-organising maps transform input patterns of arbitrary dimension into a one- or two-dimensional discrete map (see the sketch after this list).
- Feature extraction: given data from an input space with a non-linear distribution, the SOM is able to select a good set of features for approximating the distribution.
- Topological ordering: the feature map computed by the SOM is topologically ordered in the sense that the spatial location of a neuron in the lattice corresponds to a particular domain or feature of input patterns.
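Continuing the hypothetical sketch above, the dimensionality-reduction property amounts to replacing each input vector by the lattice coordinates of its best-matching neuron:

```python
import numpy as np

# continuing the sketch above: each m-dimensional input is replaced by
# the 2-D lattice coordinates of its best-matching neuron
def project(x, W, coords):
    return coords[np.argmin(np.linalg.norm(W - x, axis=1))]

low_dim = np.array([project(x, W, coords) for x in data])  # shape (500, 2)
```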
Applications:
- locating similar patterns in maps and satellite images (e.g. meteorology).
- predicting promising areas for oil prospecting from geological data.
- organisation of and retrieval from large document collections (WEBSOM).
- dimensionality reduction before computer vision tasks (due to its topological ordering qualities): <img src="hybrid.png" width="350" height="350">