Computer Structures Laboratory, Department of Computer and Mathematical Sciences, Graduate School of Information Sciences, Tohoku University
(Information Engineering Course, Department of Electrical, Information and Physics Engineering, School of Engineering, Tohoku University)
Aoki and Ito (K.) Laboratory

Sparse2DGS: Sparse-View Surface Reconstruction using 2D Gaussian Splatting with Dense Point Cloud

Natsuki Takama (Tohoku University), Shintaro Ito (Tohoku University), Koichi Ito (Tohoku University), Hwann-Tzong Chen (National Tsing Hua University), Takafumi Aoki (Tohoku University)
IEEE International Conference on Image Processing, pp. 2844--2849, September 2025.
Graphical Abstract
Abstract

Gaussian Splatting (GS) has gained attention as a fast and effective method for novel view synthesis. It has also been applied to 3D reconstruction from multi-view images, achieving fast and accurate reconstruction. However, GS assumes that the input contains a large number of multi-view images, and therefore, the reconstruction accuracy decreases significantly when only a limited number of input images are available. One of the main reasons is the insufficient number of 3D points in the sparse point cloud obtained through Structure from Motion (SfM), which results in a poor initialization for optimizing the Gaussian primitives. We propose a new 3D reconstruction method, called Sparse2DGS, that enhances 2D Gaussian Splatting (2DGS) for reconstructing objects from only three images. Sparse2DGS employs DUSt3R, a foundation model for stereo images, together with COLMAP MVS to generate a highly accurate and dense 3D point cloud, which is then used to initialize the 2D Gaussians. Through experiments on the DTU dataset, we show that Sparse2DGS can accurately reconstruct the 3D shapes of objects using just three images.
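To illustrate the role of the dense point cloud, the sketch below shows a common Gaussian-initialization heuristic: each 3D point becomes a Gaussian mean, and its isotropic scale is set from the mean distance to its nearest neighbors, so a denser cloud yields smaller, better-localized primitives. This is a minimal, hypothetical sketch using random stand-in arrays for the DUSt3R and COLMAP MVS outputs; `init_gaussians_from_points` and the specific heuristic are illustrative assumptions, not the exact Sparse2DGS procedure.

```python
import numpy as np

def init_gaussians_from_points(points, k=3):
    """Initialize per-point Gaussian parameters from a point cloud.

    Each point becomes a Gaussian mean; its isotropic scale is the mean
    distance to its k nearest neighbors (a common GS initialization
    heuristic, assumed here for illustration)."""
    # Pairwise squared distances (fine for small illustrative clouds).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-distance
    knn = np.sort(np.sqrt(d2), axis=1)[:, :k]
    scales = knn.mean(axis=1)
    return points, scales

# Hypothetical stand-ins for the DUSt3R and COLMAP MVS point clouds,
# merged into one dense cloud before initializing the 2D Gaussians.
rng = np.random.default_rng(0)
dust3r_pts = rng.random((100, 3))
mvs_pts = rng.random((80, 3))
merged = np.concatenate([dust3r_pts, mvs_pts], axis=0)
means, scales = init_gaussians_from_points(merged)
```

With only an SfM-sparse cloud (tens of points from three images), the same heuristic produces few, large, poorly placed Gaussians, which is the failure mode the abstract attributes to limited-view GS.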
