You are working with the text-only light edition of "H. Lohninger: Teach/Me Data Analysis, Springer-Verlag, Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8".
See also: time series model building, recurrent networks
When dealing with neural networks, the model-finding
process is very similar to that for ARIMA models. The three phases
(model identification, parameter estimation, and diagnostic checking) can also
be distinguished, although the terminology is usually quite different. Moreover,
the heuristics guiding the model selection process are not as detailed as for
ARIMA models, partly because of the non-linear mapping produced by neural
networks. A thorough analysis of the time series, involving at least a
trend analysis, a check for seasonal patterns, and an inspection of the
autocorrelation, is definitely advisable, because it reveals relevant properties characterizing
the data. When using a standard feed-forward neural network, the number
of layers and the number of units per layer have to be selected; in addition,
various parameters of the training algorithm may be adjusted. In general, the
goal is to find small models, because they have fewer degrees of freedom and
therefore require less training data to obtain reliable results, whereas large
neural networks need huge amounts of data. The hidden layers should be small
enough to allow generalization, yet large enough to produce the required mapping.
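The trade-off between model size and training-data requirements can be made concrete by counting the free parameters (weights and biases) of a fully connected feed-forward network. A minimal sketch; the layer sizes below are illustrative choices, not values from the text:

```python
def mlp_parameter_count(layer_sizes):
    """Number of free parameters (weights and biases) of a fully
    connected feed-forward network; layer_sizes lists the number of
    units per layer, from input to output."""
    return sum(n_in * n_out + n_out          # weight matrix + bias vector
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# Doubling the hidden layer roughly doubles the degrees of freedom:
small = mlp_parameter_count([10, 5, 1])      # 10*5 + 5 + 5*1 + 1 = 61
large = mlp_parameter_count([10, 10, 1])     # 10*10 + 10 + 10*1 + 1 = 121
```

Each additional parameter is a degree of freedom that must be constrained by the training data, which is why the small network is preferable when data are scarce.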
The input to the network consists of a time window of preceding sequence elements, and the network is trained to forecast the next element. (Annotation: this resembles an AR model from the class of ARIMA models, with the window size corresponding to the order of the AR model.) Using a time window has the advantage that standard feed-forward neural networks can be used, so the model can be simulated with standard neural network modeling tools.
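The time-window setup can be sketched as follows, assuming NumPy is available. The toy series, window size, hidden-layer size, and training schedule are all illustrative assumptions, not values from the text; the windowed inputs play the same role as the lagged values of an AR model:

```python
import numpy as np

def make_windows(series, window):
    """Build training pairs: each input is `window` consecutive values,
    the target is the value that follows (the AR-model analogy)."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.asarray(series[window:])
    return X, y

rng = np.random.default_rng(0)
t = np.arange(300)
series = np.sin(2 * np.pi * t / 25)          # toy periodic series

window, hidden = 8, 6                        # deliberately small model
X, y = make_windows(series, window)

# one tanh hidden layer and a linear output unit
W1 = rng.normal(0.0, 0.5, (window, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, (hidden, 1));      b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):                        # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (h @ W2 + b2).ravel()             # one-step-ahead forecast
    g = (pred - y)[:, None] / len(y)         # gradient of 1/2 * MSE
    # backpropagation through the two layers
    gW2, gb2 = h.T @ g, g.sum(axis=0)
    gh = (g @ W2.T) * (1.0 - h ** 2)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    W2 -= lr * gW2;        b2 -= lr * gb2

mse = float(np.mean((pred - y) ** 2))        # in-sample forecast error
```

Because the inputs are just fixed-length vectors of lagged values, the same windowed data could equally be fed to any standard neural network modeling tool.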
Last Update: 2004-Jul-03