TY - GEN
T1 - Applying Hindsight Experience Replay to Procedural Level Generation
AU - Susanto, Evan Kusuma
AU - Tjandrasa, Handayani
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/4/9
Y1 - 2021/4/9
N2 - Designing a video game level requires a precise balance in difficulty adjustment as a level that is too simple will cause players to lose interest quickly. On the other hand, making a level too complicated will frustrate the players, making them abandon the game. We propose a new method to make a level generator that can learn how to design a game level by itself. Our proposed method can be used for different games with only minimal adjustments. We improve the previously proposed method by making our generator able to design a level that satisfies every user's criterion. We do this by combining Procedural Content Generation via Reinforcement Learning with the Hindsight Experience Replay method. We use our model to generate levels from 4 different games and compare the success rate with a random agent. Our model achieves more than 90% success rate for almost every scenario and performs much better when compared to a random agent.
KW - Deep Reinforcement Learning
KW - Multi-Goal Reinforcement Learning
KW - Procedural Level Generation
UR - http://www.scopus.com/inward/record.url?scp=85107315407&partnerID=8YFLogxK
U2 - 10.1109/EIConCIT50028.2021.9431893
DO - 10.1109/EIConCIT50028.2021.9431893
M3 - Conference contribution
AN - SCOPUS:85107315407
T3 - 3rd 2021 East Indonesia Conference on Computer and Information Technology, EIConCIT 2021
SP - 427
EP - 432
BT - 3rd 2021 East Indonesia Conference on Computer and Information Technology, EIConCIT 2021
A2 - Alfred, Rayner
A2 - Haviluddin, Haviluddin
A2 - Wibawa, Aji Prasetya
A2 - Santoso, Joan
A2 - Kurniawan, Fachrul
A2 - Junaedi, Hartarto
A2 - Purnawansyah, Purnawansyah
A2 - Setyati, Endang
A2 - Saurik, Herman Thuan To
A2 - Setiawan, Esther Irawati
A2 - Setyaningsih, Eka Rahayu
A2 - Pramana, Edwin
A2 - Kristian, Yosi
A2 - Kelvin, Kelvin
A2 - Purwanto, Devi Dwi
A2 - Kardinata, Eunike
A2 - Anugrah, Prananda
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd East Indonesia Conference on Computer and Information Technology, EIConCIT 2021
Y2 - 9 April 2021 through 11 April 2021
ER -