Defining motion areas on the face of a 3D virtual character starts with mapping skeleton movement. Every animated character requires special handling based on the size and location of its bones to produce facial expressions correctly, and this process is typically carried out separately for each face model. This research uses marker-based motion capture data as a reference for adaptively and automatically generating clusters on the faces of 3D characters. Vertices that form expressions on the face of the 3D model are selected as cluster centroids, each representing a motion area, and their number corresponds to the number of feature-point markers in the motion capture data. The clustering process combines a modified nearest neighbor approach with the feature-point values. The results demonstrate a clustering process that generates motion areas on a variety of 3D face models.
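The core clustering step described above, assigning each face vertex to the nearest feature-point centroid so that the number of motion areas equals the number of markers, can be sketched as follows. This is a minimal nearest-neighbor illustration, not the paper's exact algorithm; the function name `assign_motion_areas` and the sample coordinates are hypothetical.

```python
import math

def assign_motion_areas(vertices, centroids):
    """Assign each mesh vertex to its nearest feature-point centroid.

    A minimal nearest-neighbor sketch of the clustering step: `vertices`
    and `centroids` are lists of (x, y, z) tuples, where each centroid is
    a vertex chosen to match one feature-point marker. Returns one
    centroid index per vertex, so every vertex belongs to exactly one
    motion area.
    """
    clusters = []
    for v in vertices:
        # Pick the centroid with the smallest Euclidean distance.
        best = min(range(len(centroids)),
                   key=lambda i: math.dist(v, centroids[i]))
        clusters.append(best)
    return clusters

# Hypothetical example: two feature-point markers, three face vertices.
centroids = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
vertices = [(0.1, 0.0, 0.0), (0.9, 0.1, 0.0), (0.6, 0.0, 0.0)]
print(assign_motion_areas(vertices, centroids))  # → [0, 1, 1]
```

A modified nearest-neighbor scheme, as referenced in the abstract, would replace the plain Euclidean distance here with a distance weighted by the feature-point values.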