Kohonen Self-Organizing Feature Map



  • Kohonen Self-Organizing Feature Map (SOM) refers to a neural network that is trained using competitive learning. Competitive learning implies that a competition process takes place before each cycle of learning: some criterion selects a winning processing element. After the winning processing element is chosen, its weight vector is adjusted according to the learning law in use (Hecht-Nielsen 1990).
  • The self-organizing map produces topologically ordered mappings between the input data and the processing elements of the map. Topological ordering implies that if two inputs have comparable characteristics, the processing elements responding to those inputs are located close to each other on the map. The weight vectors of the processing elements are organized in ascending or descending order, i.e. Wi ≤ Wi+1 for all values of i, or Wi ≥ Wi+1 for all values of i (this definition is valid for a one-dimensional self-organizing map only).
  • The self-organizing map is typically represented as a two-dimensional sheet of processing elements, as shown in the figure below. Each processing element has its own weight vector, and the learning of the SOM (self-organizing map) is based on the adaptation of these vectors.
  • The processing elements of the network are made competitive during the self-organizing process, and a specific criterion picks the winning processing element whose weights are updated. The criterion usually used is the minimum Euclidean distance between the input vector and the weight vector. The SOM (self-organizing map) differs from basic competitive learning in that, instead of adjusting only the weight vector of the winning processing element, the weight vectors of neighboring processing elements are also adjusted. The size of the neighborhood establishes the rough ordering of the SOM and is diminished as time goes on. At the end, only the winning processing element is adjusted, which makes the fine-tuning of the SOM possible. The use of a neighborhood makes the topological ordering procedure possible, and together with competitive learning makes the process non-linear.
  • The self-organizing map was introduced by the Finnish professor and researcher Dr. Teuvo Kohonen in 1982. It refers to an unsupervised learning model proposed for applications in which maintaining a topology between the input and output spaces is important. The notable attribute of this algorithm is that input vectors that are close and similar in the high-dimensional space are also mapped to nearby nodes in the 2D space. It is fundamentally a method for dimensionality reduction, because it maps high-dimensional inputs to a low-dimensional discretized representation while preserving the basic structure of its input space.
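
To make the dimensionality-reduction view concrete, the sketch below (not part of the original article) maps a high-dimensional input to the 2D coordinate of its closest node; the 8x8 grid, the 5-dimensional inputs, and the randomly initialized weights are purely illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the dimensionality-reduction view: each high-dimensional
# input is mapped to the 2D grid coordinate of its closest node.
# The 8x8 grid, 5-dimensional inputs, and random weights are assumptions;
# in practice the weights would come from a trained SOM.
rows, cols, dim = 8, 8, 5
weights = np.random.rand(rows, cols, dim)

def project_to_grid(x, weights):
    """Return the (i, j) coordinate of the node closest to input x."""
    dists = np.linalg.norm(weights - x, axis=2)   # Euclidean distance to every node
    return np.unravel_index(np.argmin(dists), dists.shape)

x1 = np.random.rand(dim)
x2 = x1 + 0.01 * np.random.rand(dim)   # a very similar input
# After training, similar inputs land on the same or neighboring grid nodes.
print(project_to_grid(x1, weights), project_to_grid(x2, weights))
```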
[Figure: Kohonen Self-Organizing Map]
  • The entire learning process occurs without supervision, because the nodes are self-organizing. They are also referred to as feature maps, as they basically retain the features of the input data and simply group inputs according to the similarity between one another. This has practical value for visualizing complex or huge quantities of high-dimensional data and showing the relationships between them in a low-dimensional, usually two-dimensional, field, in order to check whether the given unlabeled data have any structure to them.
  • A Self-Organizing Map (SOM) differs from typical artificial neural networks (ANNs) both in its architecture and algorithmic properties. Its structure consists of a single layer, a linear 2D grid of neurons, rather than a series of layers. All the nodes on this lattice are connected directly to the input vector, but not to one another. This means the nodes do not know the values of their neighbors, and only update the weights of their connections as a function of the given input. The grid itself is the map, and it organizes itself at each iteration as a function of the input data. As such, after clustering, each node has its own coordinate (i, j), which enables the Euclidean distance between two nodes to be calculated by means of the Pythagorean theorem.
  • A Self-Organizing Map utilizes competitive learning rather than error-correction learning to modify its weights. This implies that only a single node is activated at each cycle in which the features of an instance of the input vector are presented to the neural network, as all nodes compete for the privilege to respond to the input.
  • The selected node, the Best Matching Unit (BMU), is chosen according to the similarity between the current input values and all the nodes in the network. The node with the smallest Euclidean distance to the input vector is chosen, and its neighboring nodes within a specific radius have their weight vectors slightly adjusted to match the input vector, as shown in the sketch below. By passing over all the nodes on the grid, the whole grid eventually matches the entire input dataset, with similar nodes gathered towards one area and dissimilar ones separated.
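
As a rough illustration of the BMU selection and neighborhood described above, here is a minimal NumPy sketch; the 10x10 grid, the Gaussian form of the neighborhood, and all variable names are assumptions made for the example rather than details taken from the article.

```python
import numpy as np

# Minimal sketch of BMU selection and its Gaussian neighborhood on the 2D grid.
# The 10x10 grid, the neighborhood radius, and the Gaussian form are assumptions.
rows, cols, dim = 10, 10, 3
weights = np.random.rand(rows, cols, dim)   # one weight vector per node

def find_bmu(x, weights):
    """Return the (i, j) coordinate of the Best Matching Unit for input x."""
    dists = np.linalg.norm(weights - x, axis=2)       # Euclidean distance to every node
    return np.unravel_index(np.argmin(dists), dists.shape)

def neighborhood(bmu, sigma, rows, cols):
    """Gaussian neighborhood weights, based on grid distance to the BMU."""
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_dist_sq = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2   # Pythagorean distance on the grid
    return np.exp(-grid_dist_sq / (2 * sigma ** 2))

x = np.random.rand(dim)                     # one input vector
bmu = find_bmu(x, weights)
beta = neighborhood(bmu, sigma=2.0, rows=rows, cols=cols)
print("BMU coordinate:", bmu)
```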
[Figure: Kohonen Self-Organizing Feature Map]

Algorithm

Step 1

  • Initialize each node weight w_ij to a random value.

Step 2

  • Choose a random input vector x_k.

Step 3

  • Repeat steps 4 and 5 for all nodes on the map.

Step 4

  • Calculate the Euclidean distance between the weight vector w_ij and the input vector x(t) for the current node, starting with the first node, where t, i, j = 0.

Step 5

  • Track the node that produces the smallest distance at iteration t.

Step 6

  • Determine the overall Best Matching Unit (BMU), i.e. the node with the smallest distance among all those calculated.

Step 7

  • Determine the topological neighborhood β_ij(t) and its radius σ(t) of the BMU in the Kohonen map.

Step 8

  • Repeat for all nodes in the BMU neighborhood: update the weight vector w_ij of each node in the neighborhood of the BMU by adding a fraction of the difference between the input vector x(t) and the neuron's weight w(t), i.e. w_ij(t + 1) = w_ij(t) + β_ij(t) · (x(t) - w_ij(t)).

Step 9

  • Repeat the complete iteration until the chosen iteration limit t = n is reached.
  • Here, step 1 represents the initialization phase, while steps 2 to 9 represent the training phase.

Where:

  • t = Current iteration.
  • i = Row coordinate of the nodes grid.
  • j = Column coordinate of the nodes grid.
  • w = Weight vector
  • w_ij = Association weight of node i,j in the grid.
  • x = Input vector
  • x(t) = The input vector instance at iteration t
  • β_ij(t) = The neighborhood function, which decreases with the distance of node i,j from the BMU.
  • σ(t) = The radius of the neighborhood function, which calculates how far neighbor nodes are examined in the 2D grid when updating vectors. It gradually decreases over time.
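
Putting steps 1 to 9 together, the following NumPy sketch implements one possible training loop; the exponential decay schedules for the learning rate and the radius σ(t), and the Gaussian choice for β_ij(t), are assumptions, since the steps above do not prescribe them.

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_iterations=1000, sigma0=3.0, lr0=0.5):
    """Train a Kohonen SOM on `data` of shape (n_samples, dim).
    The decay schedules and Gaussian neighborhood are assumed choices."""
    dim = data.shape[1]
    # Step 1: initialize each node weight w_ij to a random value.
    weights = np.random.rand(rows, cols, dim)
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    time_const = n_iterations / np.log(sigma0)

    for t in range(n_iterations):
        # Step 2: choose a random input vector x(t).
        x = data[np.random.randint(len(data))]
        # Steps 3-6: Euclidean distance to every node, then the overall BMU.
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Step 7: neighborhood beta_ij(t) with radius sigma(t) that shrinks over time.
        sigma = sigma0 * np.exp(-t / time_const)
        lr = lr0 * np.exp(-t / n_iterations)
        grid_dist_sq = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
        beta = np.exp(-grid_dist_sq / (2 * sigma ** 2))
        # Step 8: move each neighbor's weight a fraction toward x(t).
        weights += lr * beta[:, :, None] * (x - weights)
        # Step 9: loop until the iteration limit t = n is reached.
    return weights

# Usage on random data (assumed 3-dimensional inputs).
som_weights = train_som(np.random.rand(200, 3))
print(som_weights.shape)   # (10, 10, 3)
```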



