Long Short-Term Memory (LSTM) is a recurrent neural network architecture that can be trained to remember long sequences of data and can act as a generative model. As a generative model, an LSTM network can reproduce the trained sequences at arbitrary length. We train an LSTM network with sequential motion data of Remo Dance, a traditional dance from East Java. The motion data are acquired from a real dancer with a motion capture system as sequences of bone rotations. Training sequential data on an LSTM network is time consuming even with current GPU technology. We found that applying feature scaling and choosing how data are grouped to be trained together are useful strategies for achieving optimal training. Our experiments show that the scale factor in feature scaling depends on how many sequences are trained together: training a single sequence requires a value range of -8 to +8, while multiple sequences trained together require a correspondingly lower range. We also found that sequences with small variance are trained better when combined with sequences of large variance. The trained LSTM network is able to reproduce the dance moves with some variations.
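
The feature-scaling strategy described above can be sketched as min-max scaling of each bone-rotation channel to a symmetric range. This is a minimal illustration, not the authors' implementation: the function name, the NumPy representation of the motion data, and the choice of per-channel min-max scaling are assumptions; only the bound of 8 for single-sequence training comes from the text.

```python
import numpy as np

def scale_features(data, bound=8.0):
    """Min-max scale each feature column to [-bound, +bound].

    data: (timesteps, features) array of bone rotation angles.
    A bound of 8.0 reflects the single-sequence setting reported
    above; training multiple sequences together would use a
    smaller bound. This is an illustrative sketch, not the
    paper's actual preprocessing code.
    """
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant channels
    return (data - lo) / span * (2 * bound) - bound

# Toy sequence: 100 timesteps, 3 rotation channels.
rng = np.random.default_rng(0)
seq = rng.normal(size=(100, 3))
scaled = scale_features(seq, bound=8.0)
print(scaled.min(), scaled.max())  # → -8.0 8.0
```

Scaling to a wider range than the usual [-1, +1] keeps the LSTM's gate activations away from their saturated regions for long sequences, which is consistent with the range dependence on sequence count reported above.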