Design of Experiment to Optimize the Architecture of Deep Learning for Nonlinear Time Series Forecasting

Suhartono, Novri Suhermi*, Dedy Dwi Prastyo

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

6 Citations (Scopus)

Abstract

The neural network architecture is essential for constructing a model that produces minimum forecast error. Key factors include the input choice, the number of hidden layers, the series length, and the activation function. In this paper we present a design of experiment to optimize the neural network architecture. We conduct a simulation study by modeling data generated from a nonlinear time series model, the subset exponential smooth transition autoregressive model ESTAR([3]). We explore a deep learning model, the deep feedforward network, and compare it to the single-hidden-layer feedforward neural network. Our experiments show that the input choice is the most important factor for improving forecast performance, and that the deep learning model is a promising approach for the forecasting task.
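The abstract describes generating data from a subset ESTAR([3]) process, in which only lag 3 enters the model through an exponential transition function. The paper does not spell out the exact functional form or parameter values, so the sketch below is a minimal, hypothetical simulator: it assumes the common exponential-transition form y_t = φ·y_{t−3}·exp(−γ·y_{t−3}²) + ε_t, with φ, γ, and σ chosen only for illustration.

```python
import numpy as np

def simulate_estar3(n, phi=3.0, gamma=0.1, sigma=1.0, seed=0):
    """Simulate a subset ESTAR([3]) series of length n.

    Assumed form (hypothetical, for illustration only):
        y_t = phi * y_{t-3} * exp(-gamma * y_{t-3}^2) + eps_t,
    where eps_t ~ N(0, sigma^2). The factor x*exp(-gamma*x^2) is
    bounded, so the recursion stays stable for any phi.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n + 3)  # 3 zero pre-sample values to start the lag-3 recursion
    for t in range(3, n + 3):
        z = y[t - 3]
        y[t] = phi * z * np.exp(-gamma * z * z) + rng.normal(0.0, sigma)
    return y[3:]

series = simulate_estar3(500)

# Input choice, the factor the study finds most important: pairing each
# target y_t with the correct lag y_{t-3} yields the supervised data set
# a feedforward network would be trained on.
X = series[:-3].reshape(-1, 1)   # inputs: y_{t-3}
y_target = series[3:]            # targets: y_t
```

Because the series is generated by a lag-3 mechanism, an input set that includes y_{t−3} carries the predictive signal, whereas inputs restricted to lags 1 and 2 would not; this is one way the experiment can isolate the effect of input choice.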

Original language: English
Pages (from-to): 269-276
Number of pages: 8
Journal: Procedia Computer Science
Volume: 144
DOIs
Publication status: Published - 2018
Event: 3rd International Neural Network Society Conference on Big Data and Deep Learning, INNS BDDL 2018 - Sanur, Bali, Indonesia
Duration: 17 Apr 2018 – 19 Apr 2018

Keywords

  • Deep learning
  • deep feedforward network
  • design of experiment
  • forecasting
  • time series
