Abstract
The choice of neural architecture is essential for constructing a neural network model that produces minimal error. Relevant factors include, among others, the choice of inputs, the number of hidden layers, the series length, and the activation function. In this paper we present a design of experiments for optimizing the neural network model. We conduct a simulation study by modeling data generated from a nonlinear time series model, the subset 3 exponential smooth transition autoregressive (ESTAR) model [3]. We explore a deep learning model, the deep feedforward network, and compare it to a single-hidden-layer feedforward neural network. Our experiments show that the input choice is the most important factor for improving forecast performance, and that the deep learning model is a promising approach for the forecasting task.
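A minimal sketch of the kind of comparison described in the abstract is given below: data are simulated from a simple ESTAR-style process and lagged values are used as inputs to both a single-hidden-layer network and a deeper feedforward network. This is not the authors' code; the specific subset 3 ESTAR coefficients, the lag (input) choices, and all hyper-parameters here are illustrative assumptions, and Keras is used only as a convenient stand-in framework.

```python
# Illustrative sketch only -- not the paper's implementation. The ESTAR form,
# coefficients, lag structure, and network hyper-parameters are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)

def simulate_estar(n=1000, phi=0.8, theta=-0.6, gamma=2.0, sigma=0.1):
    """Simulate a simple ESTAR(1)-type series (assumed form):
    y_t = phi*y_{t-1} + theta*y_{t-1}*(1 - exp(-gamma*y_{t-1}^2)) + e_t."""
    y = np.zeros(n)
    for t in range(1, n):
        transition = 1.0 - np.exp(-gamma * y[t - 1] ** 2)
        y[t] = phi * y[t - 1] + theta * y[t - 1] * transition + rng.normal(0.0, sigma)
    return y

def make_lagged(y, lags=3):
    """Build (inputs, target) pairs from lagged values -- the 'input choice' factor."""
    X = np.column_stack([y[lag:len(y) - lags + lag] for lag in range(lags)])
    return X, y[lags:]

def build_model(hidden_layers=1, units=10, activation="tanh", lags=3):
    """hidden_layers=1 gives the single-hidden-layer baseline;
    hidden_layers>1 gives a deep feedforward network."""
    model = keras.Sequential([layers.Input(shape=(lags,))])
    for _ in range(hidden_layers):
        model.add(layers.Dense(units, activation=activation))
    model.add(layers.Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

y = simulate_estar()
X, target = make_lagged(y, lags=3)
split = int(0.8 * len(X))

results = {}
for depth in (1, 3):  # shallow vs deep: one factor of the experiment design
    model = build_model(hidden_layers=depth)
    model.fit(X[:split], target[:split], epochs=50, batch_size=32, verbose=0)
    results[f"{depth} hidden layer(s)"] = model.evaluate(X[split:], target[split:], verbose=0)

print(results)  # out-of-sample MSE for each architecture
```

In the paper's full design of experiments, the input choice, number of hidden layers, series length, and activation function would each be varied systematically rather than fixed as in this sketch.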
| Original language | English |
| --- | --- |
| Pages (from-to) | 269-276 |
| Number of pages | 8 |
| Journal | Procedia Computer Science |
| Volume | 144 |
| DOIs | |
| Publication status | Published - 2018 |
| Event | 3rd International Neural Network Society Conference on Big Data and Deep Learning, INNS BDDL 2018 - Sanur, Bali, Indonesia. Duration: 17 Apr 2018 → 19 Apr 2018 |
Keywords
- Deep learning
- deep feedforward network
- design of experiment
- forecasting
- time series