TY - GEN
T1 - Controlling the hidden layers' output to optimizing the training process in the Deep Neural Network algorithm
AU - Andreas,
AU - Purnomo, Mauridhi Hery
AU - Hariadi, Mochamad
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/10/2
Y1 - 2015/10/2
N2 - Deep learning is one of the most recent developments of the Artificial Neural Network (ANN) in machine learning. The Deep Neural Network (DNN) algorithm is commonly used in image and speech recognition applications. As an extension of the Artificial Neural Network, a Deep Neural Network may contain a large number of hidden layers. In a DNN, the output of each node is a quadratic function of its inputs, and the training process is very difficult. In this paper, we attempt to optimize the training process by slightly restructuring the deep architecture and combining several existing algorithms. The output error of each unit in the previous layer is calculated, and the weights of the unit with the smallest error are retained for the next iteration. This paper uses MNIST handwriting images as its training and test data. Our tests show that, by selecting outputs in each hidden layer in this way, the DNN training process becomes approximately 8% faster.
AB - Deep learning is one of the most recent developments of the Artificial Neural Network (ANN) in machine learning. The Deep Neural Network (DNN) algorithm is commonly used in image and speech recognition applications. As an extension of the Artificial Neural Network, a Deep Neural Network may contain a large number of hidden layers. In a DNN, the output of each node is a quadratic function of its inputs, and the training process is very difficult. In this paper, we attempt to optimize the training process by slightly restructuring the deep architecture and combining several existing algorithms. The output error of each unit in the previous layer is calculated, and the weights of the unit with the smallest error are retained for the next iteration. This paper uses MNIST handwriting images as its training and test data. Our tests show that, by selecting outputs in each hidden layer in this way, the DNN training process becomes approximately 8% faster.
KW - deep neural network
KW - image and voice recognition
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=84962207876&partnerID=8YFLogxK
U2 - 10.1109/CYBER.2015.7288086
DO - 10.1109/CYBER.2015.7288086
M3 - Conference contribution
AN - SCOPUS:84962207876
T3 - 2015 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, IEEE-CYBER 2015
SP - 1028
EP - 1032
BT - 2015 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, IEEE-CYBER 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 5th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, IEEE-CYBER 2015
Y2 - 9 June 2015 through 12 June 2015
ER -