TY - JOUR
T1 - A 3D template-based point generation network for 3D reconstruction from single images
AU - Yuniarti, Anny
AU - Arifin, Agus Zainal
AU - Suciati, Nanik
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/11
Y1 - 2021/11
N2 - Learning-based approaches to the 3D reconstruction problem have attracted researchers due to their excellent performance in image segmentation and image classification. The increasing attention to learning-based approaches for 3D reconstruction is also due to the availability of publicly shared 3D datasets, such as the ShapeNet and ModelNet datasets. Several deep learning approaches use voxel-based representations. However, voxel-based methods suffer from inefficiency and an inability to produce higher-resolution 3D results. Another representation is the point cloud, an unstructured set of 3D points on the object's surface. However, learning such irregular structures is challenging due to the unordered nature of this representation. This paper proposes a new framework for 3D reconstruction from 2D images that introduces a 3D template-based point generation network. The network infers a 3D template and, based on an input image, generates 3D point clouds representing the reconstructed 3D object. The proposed network takes two inputs, the encoded 2D image and the encoded 3D point template, produced by an image classification module and a 3D template generation module, respectively. Experiments on the ShapeNet dataset show better performance than existing methods in terms of the Chamfer distance between the 3D ground-truth data and the 3D reconstructed data.
AB - Learning-based approaches to the 3D reconstruction problem have attracted researchers due to their excellent performance in image segmentation and image classification. The increasing attention to learning-based approaches for 3D reconstruction is also due to the availability of publicly shared 3D datasets, such as the ShapeNet and ModelNet datasets. Several deep learning approaches use voxel-based representations. However, voxel-based methods suffer from inefficiency and an inability to produce higher-resolution 3D results. Another representation is the point cloud, an unstructured set of 3D points on the object's surface. However, learning such irregular structures is challenging due to the unordered nature of this representation. This paper proposes a new framework for 3D reconstruction from 2D images that introduces a 3D template-based point generation network. The network infers a 3D template and, based on an input image, generates 3D point clouds representing the reconstructed 3D object. The proposed network takes two inputs, the encoded 2D image and the encoded 3D point template, produced by an image classification module and a 3D template generation module, respectively. Experiments on the ShapeNet dataset show better performance than existing methods in terms of the Chamfer distance between the 3D ground-truth data and the 3D reconstructed data.
KW - 3D reconstruction
KW - Point cloud
KW - Point generation network
KW - Single view reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85111596205&partnerID=8YFLogxK
U2 - 10.1016/j.asoc.2021.107749
DO - 10.1016/j.asoc.2021.107749
M3 - Article
AN - SCOPUS:85111596205
SN - 1568-4946
VL - 111
JO - Applied Soft Computing
JF - Applied Soft Computing
M1 - 107749
ER -