Neural Network applications can be grouped into the following categories:
Clustering:
A clustering algorithm explores the similarity between patterns and
places similar patterns in a cluster. The best-known applications include
data compression and data mining.
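The clustering idea can be sketched with a minimal k-means loop in Python. For brevity the initial centers are simply the first k points, which is an assumption; real implementations choose them more carefully:

```python
def kmeans(points, k, iters=10):
    """Group similar 1-D patterns into k clusters (minimal sketch)."""
    centers = list(points[:k])  # naive initialization: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of patterns end up in two clusters:
centers, clusters = kmeans([1.0, 10.0, 1.1, 9.8, 0.9, 10.2], k=2)
```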
Classification/Pattern recognition:
The task of pattern recognition is to assign an input pattern (such as
a handwritten symbol) to one of many classes. This category includes
algorithmic implementations such as associative memory.
Function approximation:
The task of function approximation is to find an estimate of the
unknown function f() subject to noise. Various engineering and scientific
disciplines require function approximation.
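The task can be sketched with the simplest possible approximator: a single linear unit trained by gradient descent on noisy samples of an unknown function. The hidden function f(x) = 2x + 1 and the learning rate below are illustrative assumptions:

```python
def fit_linear(samples, lr=0.05, epochs=500):
    """Estimate an unknown function from noisy (x, y) samples
    using a single linear unit y ~ w*x + b."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y   # error against the noisy target
            w -= lr * err * x       # gradient step for the weight
            b -= lr * err           # gradient step for the bias
    return w, b

# Noisy samples of the hidden function f(x) = 2x + 1:
w, b = fit_linear([(0.0, 1.02), (1.0, 2.97), (2.0, 5.01), (3.0, 6.99)])
```

Despite the noise, the fitted parameters land close to the true w = 2 and b = 1.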
Prediction/Dynamical Systems:
The task is to forecast future values of time-sequenced data.
Prediction has a significant impact on decision support systems. Prediction
differs from function approximation by taking the time factor into account.
Here the system is dynamic and may produce different results for the
same input data depending on the system state (time).
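The difference from a static function can be sketched with a tiny stateful predictor. An exponential moving average is used here purely as an illustration of internal state, not as a neural model:

```python
class MovingPredictor:
    """A dynamic system: output depends on internal state, not just input."""
    def __init__(self, alpha=0.5):
        self.state = 0.0    # accumulated history of past inputs
        self.alpha = alpha

    def step(self, x):
        # Blend the new input into the state; the state itself serves
        # as the prediction for the next value in the sequence.
        self.state = self.alpha * x + (1 - self.alpha) * self.state
        return self.state

p = MovingPredictor()
first = p.step(1.0)    # -> 0.5
second = p.step(1.0)   # same input, different output -> 0.75
```

The second call receives the same input as the first but returns a different value, because the system state has changed in between.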
Neural Network types can be classified based on the following attributes:
Applications
-Classification
-Clustering
-Function approximation
-Prediction
Connection Type
- Static (feedforward)
- Dynamic (feedback)
Topology
- Single layer
- Multilayer
- Recurrent
- Self-organized
- . . .
Learning Methods
- Supervised
- Unsupervised
One of the most important aspects of a Neural Network is the learning
process. To describe that process I am going to use a nice analogy
from Thomas Lahore:
The learning process of a Neural Network can be viewed as reshaping
a sheet of metal, which represents the output (range) of the function
being mapped. The training set (domain) acts as energy required to
bend the sheet of metal such that it passes through predefined points.
However, the metal, by its nature, will resist such reshaping. So
the network will attempt to find a low energy configuration (i.e.
a flat/non-wrinkled shape) that satisfies the constraints (training
data).
Learning can be done in a supervised or unsupervised manner.
In supervised training, both the inputs and the outputs are provided.
The network then processes the inputs and compares its resulting outputs
against the desired outputs. Errors are then calculated, causing the
system to adjust the weights which control the network. This process
occurs over and over as the weights are continually tweaked.
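This loop can be sketched with a perceptron learning an AND gate; the gate and the learning rate are assumptions chosen for illustration. Inputs and desired outputs are both given, and the error between them drives the weight updates, repeated epoch after epoch:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Supervised loop: compare output to the desired output and
    adjust the weights by the error, over and over."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), desired in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = desired - out      # the error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training set: inputs paired with their desired outputs (AND gate).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

After training, the learned weights reproduce the desired output for every input pattern.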
In unsupervised training, the network is provided with inputs but
not with desired outputs. The system itself must then decide what
features it will use to group the input data. This is often referred
to as self-organization or adaptation.
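Self-organization can be sketched with simple competitive learning; the data and learning rate below are illustrative assumptions. No desired outputs are given: two prototype values compete for each input, the winner moves toward the input it claimed, and the prototypes drift apart to cover the natural groups in the data:

```python
def competitive_learning(inputs, prototypes, lr=0.3, epochs=10):
    """Unsupervised loop: prototypes organize themselves around
    the input data with no desired outputs provided."""
    for _ in range(epochs):
        for x in inputs:
            # Winner-take-all: the closest prototype claims the input.
            win = min(range(len(prototypes)),
                      key=lambda i: (x - prototypes[i]) ** 2)
            # Move only the winner toward the input it claimed.
            prototypes[win] += lr * (x - prototypes[win])
    return prototypes

# Inputs fall into two natural groups (near 0 and near 10); the two
# prototypes start in the middle and drift toward one group each.
protos = competitive_learning([0.0, 1.0, 9.0, 10.0], [4.0, 6.0])
```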
The following geometrical interpretations demonstrate the learning
process within different Neural Models: