You are working with the text-only light edition of "H.Lohninger: Teach/Me Data Analysis, Springer-Verlag, Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8".
Taxonomy of ANNs
Artificial neural networks (ANNs) are adaptive models that can establish
almost any relationship between data. They can be regarded as black boxes
that build mappings between a set of input vectors and a set of output
vectors. ANNs are promising for problems where traditional models fail,
especially for modeling complex phenomena which show non-linear
relationships.
Neural networks can be roughly divided into three categories:
-
Signal transfer networks. In signal transfer networks, the input
signal is transformed into an output signal. Note that the dimensionality
of the signal may change during this process. The signal is propagated
through the network and is thus changed by the internal mechanism of the
network. Most network models are based on some kind of predefined basis
function, e.g. Gaussian peaks in the case of radial basis function (RBF)
networks, or sigmoid functions in the case of multi-layer perceptrons.
-
State transition networks. In state transition networks, the network
moves from an initial state through a sequence of intermediate states
until it settles into a stable final state. Examples: Hopfield networks,
and Boltzmann machines.
-
Competitive learning networks. In competitive networks (sometimes
also called self-organizing maps, or SOMs) all the neurons of the network
compete for the input signal. The neuron which "wins" is allowed to
move towards the input signal in n-dimensional space. Example: the Kohonen
feature map.
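The signal-transfer case above can be sketched in a few lines. The following is a minimal illustration, not code from the text: a 3-dimensional input is passed through two sigmoid hidden units to a single output, showing both the propagation of the signal and the change of dimensionality. All weight values are arbitrary.

```python
import math

def sigmoid(z):
    """Sigmoid activation, as used in multi-layer perceptrons."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_w, out_w):
    """Propagate the input signal x through one hidden layer to one output."""
    # each hidden unit: sigmoid of the weighted sum of its inputs
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_w]
    # output unit: weighted sum of hidden activations (3-D signal -> 1-D signal)
    return sum(w * hi for w, hi in zip(out_w, h))

x = [0.5, -1.0, 2.0]                              # 3-D input signal
hidden_w = [[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]]   # two hidden units (illustrative weights)
out_w = [1.0, -1.0]
y = forward(x, hidden_w, out_w)                   # 1-D output signal
```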
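The competitive "winner moves towards the input" rule can also be sketched directly. This is a toy winner-take-all update in the spirit of a Kohonen-style network (without the neighborhood function of a full feature map); the cluster data and learning rate are invented for the illustration.

```python
import random

def winner(neurons, x):
    """Index of the neuron whose weight vector is closest to input x."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(range(len(neurons)), key=lambda i: dist2(neurons[i]))

def train(neurons, inputs, rate=0.3, epochs=20):
    """For each input, move only the winning neuron a step towards it."""
    for _ in range(epochs):
        for x in inputs:
            i = winner(neurons, x)
            neurons[i] = [wi + rate * (xi - wi)
                          for wi, xi in zip(neurons[i], x)]
    return neurons

random.seed(0)
# two clusters of 2-D points, around (0.2, 0.2) and (0.8, 0.8)
inputs = [(0.1 + random.random() * 0.2, 0.1 + random.random() * 0.2) for _ in range(20)]
inputs += [(0.7 + random.random() * 0.2, 0.7 + random.random() * 0.2) for _ in range(20)]
neurons = [[0.5, 0.4], [0.5, 0.6]]
train(neurons, inputs)   # each neuron ends up near one cluster
```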
What these types of networks have in common is that they "learn"
by adapting their network parameters. In general, the learning algorithm
tries to minimize the error of the model. This is often a type of gradient
descent approach - with all its pitfalls, such as getting stuck in local
minima.
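The gradient-descent principle can be shown with a deliberately trivial "network", a single weight fitting y = w*x; the data and learning rate below are invented for the sketch. Real ANNs apply the same downhill step to many weights at once, through non-linear activations.

```python
def train_weight(data, w=0.0, rate=0.01, steps=200):
    """Adapt one weight by gradient descent on the squared error."""
    for _ in range(steps):
        # gradient of E = sum (w*x - y)^2 with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data)
        w -= rate * grad   # step downhill along the error surface
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # generated by y = 2x
w = train_weight(data)                         # converges towards w = 2
```

Note the pitfall mentioned above: with a learning rate that is too large (try rate=0.1 here), the same loop overshoots the minimum and diverges instead of converging.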
Last Update: 2006-Jan-17