You are working with the text-only light edition of "H.Lohninger: Teach/Me Data Analysis, Springer-Verlag, Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8".
Extrapolation
|See also: generalization, Interpolation and Extrapolation|
Compared to linear methods of function approximation, neural networks exhibit a major drawback: they cannot extrapolate. Because a neural network can map virtually any function by adjusting its parameters to the presented training data, its behavior is constrained only where training data exists. In regions of the variable space where no training data is available, the output of a neural network is therefore not reliable.
Basically, the data space which a trained neural network can process splits into two regions: (1) the region where the density of the training data is greater than zero, and (2) all other parts of the data space, where the density of the training data is zero (or near zero). Unknown data points falling into the first region are estimated by interpolation; all other points have to be estimated by extrapolation.
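This split can be sketched as a simple membership check in one dimension. The sketch below is illustrative only (it is not part of the book's method): it treats a query point as lying in the interpolation region if it falls inside the training range and within a chosen distance `max_gap` of the nearest training sample, so large holes inside the training range also count as extrapolation. The threshold value is an assumption.

```python
def in_interpolation_region(x_query, x_train, max_gap=1.0):
    """Heuristic 1-D check: does x_query lie in a region covered by training data?

    A point counts as interpolation if it is inside the overall training
    range AND within max_gap of its nearest training sample; otherwise the
    network would have to extrapolate there.
    """
    xs = sorted(x_train)
    # Outside the overall training range: clearly extrapolation.
    if not xs[0] <= x_query <= xs[-1]:
        return False
    # Inside the range, but possibly in a large gap with no training data.
    return min(abs(x_query - x) for x in xs) <= max_gap


# Training inputs with a hole between 1.0 and 5.0:
x_train = [0.0, 0.5, 1.0, 5.0, 5.5]
```

A query at 0.3 is interpolation, a query at 7.0 lies outside the range, and a query at 3.0 sits in the hole between 1.0 and 5.0, so it also has to be treated as extrapolation.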
Let's have a look at an example concerning the performance of neural
networks under extrapolating conditions. In order to simplify the set-up,
we look at one-dimensional input data, which is related to a single response
variable. The training data is shown in the bottom part of the figure below.
Applying 15 trained networks to unknown data yields responses which
are consistent in the areas where training data was available, while producing
arbitrary outputs anywhere else.
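The effect described above can be reproduced with a minimal from-scratch sketch. This is not the book's 15-network experiment: the network size, learning rate, number of networks, and the linear target function are all illustrative assumptions. Several identically structured tanh networks are trained on the same one-dimensional data from different random starting weights; inside the training range their predictions agree, while far outside it each network settles on its own arbitrary value, so the spread across the ensemble grows.

```python
import numpy as np

def train_tiny_net(x, y, hidden=8, lr=0.05, epochs=4000, seed=0):
    """Train a 1-input, 1-output, one-hidden-layer tanh network by
    full-batch gradient descent on squared error. Returns a predict
    function plus the initial and final training MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (hidden, 1))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    X = x.reshape(-1, 1)
    n = len(x)

    def forward(Xq):
        H = np.tanh(Xq @ W1.T + b1)        # hidden activations, shape (n, hidden)
        return H, H @ W2 + b2              # predictions, shape (n,)

    _, p0 = forward(X)
    mse0 = float(np.mean((p0 - y) ** 2))
    for _ in range(epochs):
        H, p = forward(X)
        err = p - y
        # Backpropagation for the squared-error loss (constant factors
        # folded into the learning rate).
        gW2 = H.T @ err / n
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1.0 - H ** 2)
        gW1 = dH.T @ X / n
        gb1 = dH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    _, p1 = forward(X)
    mse1 = float(np.mean((p1 - y) ** 2))
    return (lambda xq: forward(np.atleast_1d(xq).reshape(-1, 1))[1]), mse0, mse1


# Linear target on [-3, 3]; a linear model would extrapolate it exactly.
x_train = np.linspace(-3.0, 3.0, 25)
y_train = 0.5 * x_train

nets = [train_tiny_net(x_train, y_train, seed=s) for s in range(5)]

# Ensemble spread at a point inside vs. far outside the training range.
inside = np.array([float(p(0.5)) for p, _, _ in nets])
outside = np.array([float(p(10.0)) for p, _, _ in nets])
spread_in, spread_out = inside.std(), outside.std()
```

Because the target is linear, this sketch also illustrates the opening claim: a linear fit would continue correctly past x = 3, whereas the saturating tanh networks level off at seed-dependent values, which is exactly the arbitrary extrapolation behavior described in the text.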
Last Update: 2006-Jän-17