Gaussian splatting

Video rendered from a 3D Gaussian splatting model

Gaussian splatting is a volume rendering technique that deals with the direct rendering of volume data without converting the data into surface or line primitives.[1] The technique was originally introduced as splatting by Lee Westover in the early 1990s.[2][3]


The technique was revitalized and surged in popularity in 2023, when a research group from Inria introduced the seminal 3D Gaussian splatting method, which offers real-time radiance field rendering.[4] Like other radiance field methods, it can convert multiple images into a representation of 3D space, then use that representation to create images as seen from new angles. Multiple works soon followed, such as 3D temporal Gaussian splatting, which offers real-time dynamic scene rendering.[5]

3D Gaussian splatting

Gaussian splatting model of a collapsed building taken from drone footage

3D Gaussian splatting (3DGS) is a technique for real-time radiance field rendering, i.e., rendering novel views of a scene from a learned volumetric representation.[4] It enables the creation of high-quality novel views in real time from multiple photos or videos of a scene, addressing a significant challenge in the field.

The method represents scenes with 3D Gaussians that retain the properties of continuous volumetric radiance fields, initialized from the sparse points produced during camera calibration. It introduces an anisotropic representation using 3D Gaussians to model radiance fields, along with interleaved optimization and density control of the Gaussians. A fast visibility-aware rendering algorithm supporting anisotropic splatting, designed for GPU execution, is also proposed.[4]

Method

Training

This diagram illustrates the working of the proposed algorithm.

The method involves several key steps:

  • Input: A set of images of a static scene together with the corresponding camera poses and a sparse point cloud, typically obtained via Structure-from-Motion.
  • 3D Gaussians: Definition of mean, covariance matrix, and opacity for each Gaussian.
  • Color representation: Using spherical harmonics to model view-dependent appearance.
  • Optimization algorithm: Optimizing the parameters using stochastic gradient descent to minimize a loss function combining L1 loss and D-SSIM, inspired by the Plenoxels work [6] (a minimal sketch of this loss appears after this list).
  • Rasterizer: Implementing a tile-based rasterizer for fast sorting and backward pass, enabling efficient blending of Gaussian components.
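
The loss has a simple closed form. Below is a minimal sketch in PyTorch-style Python, assuming images given as tensors in [0, 1]; the windowed SSIM of the original implementation is replaced here by a simplified whole-image variant, and the weight lam = 0.2 follows the value reported in the paper.

    import torch

    def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        # Single-window SSIM over the whole image; the original work uses
        # a windowed SSIM, so this global variant only illustrates the term.
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cov + c2)) / \
               ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    def training_loss(rendered, target, lam=0.2):
        # L = (1 - lam) * L1 + lam * D-SSIM, with D-SSIM = (1 - SSIM) / 2.
        l1 = (rendered - target).abs().mean()
        d_ssim = (1.0 - ssim_global(rendered, target)) / 2.0
        return (1.0 - lam) * l1 + lam * d_ssim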

The method uses differentiable 3D Gaussian splatting, which is unstructured and explicit, allowing rapid rendering and projection to 2D splats. The covariance matrix of each Gaussian can be interpreted as describing an ellipsoid, and is decomposed into a scaling matrix and a rotation matrix (see the sketch below). The gradients for all parameters are derived explicitly to avoid the overhead of automatic differentiation.[citation needed]
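
As a concrete illustration of this decomposition, the following sketch in plain NumPy (the (w, x, y, z) quaternion convention is an assumption here) builds a covariance matrix as Sigma = R S Sᵀ Rᵀ:

    import numpy as np

    def quat_to_rotmat(q):
        # Normalized quaternion (w, x, y, z) -> 3x3 rotation matrix.
        w, x, y, z = q / np.linalg.norm(q)
        return np.array([
            [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
            [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
        ])

    def covariance(scales, quat):
        # Sigma = R S S^T R^T: an ellipsoid with semi-axes `scales`,
        # oriented by `quat`. The product M M^T is symmetric positive
        # semi-definite by construction, so the parameterization always
        # yields a valid covariance during gradient-based optimization.
        M = quat_to_rotmat(np.asarray(quat, dtype=float)) @ np.diag(scales)
        return M @ M.T

Parameterizing the covariance through scaling and rotation, rather than optimizing its entries directly, is what keeps every intermediate matrix a valid covariance during optimization.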

Each step of rendering is followed by a comparison to the training views available in the dataset. The optimization uses the difference to create a dense set of 3D Gaussians that represent the scene as accurately as possible.[ citation needed ]
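
The density-control step that grows this set can be sketched as follows. This is an illustrative simplification: the gradient threshold (2e-4) and the split scale factor (1.6) follow the paper's reported defaults, only positions and scales are handled, and a real implementation would also copy opacity, rotation, and color parameters and sample split children from the parent Gaussian.

    import numpy as np

    def density_control(means, scales, grad_means,
                        grad_thresh=2e-4, size_thresh=0.01, split_factor=1.6):
        # Flag Gaussians whose positional gradients are large, a sign
        # that the local region is still poorly reconstructed.
        grad_norm = np.linalg.norm(grad_means, axis=1)
        flagged = grad_norm > grad_thresh
        # Small flagged Gaussians are cloned in place; large ones are
        # split into smaller copies (scales reduced by split_factor).
        small = flagged & (scales.max(axis=1) <= size_thresh)
        large = flagged & (scales.max(axis=1) > size_thresh)
        means = np.concatenate([means, means[small], means[large]])
        scales = np.concatenate([scales, scales[small],
                                 scales[large] / split_factor])
        return means, scales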

Usage

An optimized set of 3D Gaussians is saved to disk. As in the training step, a renderer creates a view from these Gaussians.[citation needed]

Several sets of Gaussians can be composed together into larger scenes, as sketched below.[citation needed]
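
A minimal sketch of such composition, assuming the point-attribute layout of the reference implementation's .ply export (the property names x, y, z are assumptions here) and hypothetical file names; composing two models reduces to expressing both sets of Gaussians in one coordinate frame and concatenating the parameter arrays.

    import numpy as np
    from plyfile import PlyData

    def load_means(path):
        # Read Gaussian centers from a saved model.
        v = PlyData.read(path)["vertex"]
        return np.stack([v["x"], v["y"], v["z"]], axis=1)

    a = load_means("building.ply")          # hypothetical file names
    b = load_means("statue.ply")
    rotation = np.eye(3)                    # placement of model B in A's frame
    offset = np.array([5.0, 0.0, 0.0])
    scene = np.concatenate([a, b @ rotation.T + offset], axis=0)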

Results and evaluation

Scene captured on Polycam Capture

The authors of the original paper [4] tested their algorithm on 13 real scenes from previously published datasets and on the synthetic Blender dataset.[7] They compared their method against state-of-the-art techniques such as Mip-NeRF360,[8] InstantNGP,[9] and Plenoxels.[6] The quantitative evaluation metrics used were PSNR, LPIPS, and SSIM.
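
Of these metrics, PSNR has a simple closed form; a minimal reference implementation is shown below (SSIM requires windowed statistics and LPIPS a pretrained network, so both are omitted here).

    import numpy as np

    def psnr(img, ref, max_val=1.0):
        # Peak signal-to-noise ratio in decibels, for images in [0, max_val].
        mse = np.mean((np.asarray(img, dtype=np.float64)
                       - np.asarray(ref, dtype=np.float64)) ** 2)
        return 10.0 * np.log10(max_val ** 2 / mse)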

Their fully converged model (30,000 iterations) achieves quality on par with or slightly better than Mip-NeRF360,[8] but with significantly reduced training time (35–45 minutes vs. 48 hours) and faster rendering (real time vs. 10 seconds per frame). At 7,000 iterations (5–10 minutes of training), their method achieves quality comparable to InstantNGP [9] and Plenoxels.[6]

For synthetic bounded scenes (the Blender dataset [7]), they achieved state-of-the-art results even with random initialization, starting from 100,000 uniformly random Gaussians.

Limitations

Some limitations of the method include:[citation needed]

  • Visual artifacts, such as elongated or "splotchy" Gaussians, in regions that are poorly observed in the input images.
  • Popping artifacts when large Gaussians appear or disappear abruptly, related to the simple visibility sorting.
  • Substantially higher memory consumption than competing NeRF-based methods.

The authors note that some of these limitations could potentially be addressed through future improvements such as better culling approaches, antialiasing, regularization, and compression techniques.

3D Temporal Gaussian splatting

Extending 3D Gaussian splatting to dynamic scenes, 3D temporal Gaussian splatting incorporates a time component, allowing for real-time rendering of dynamic scenes at high resolutions.[5] It represents and renders dynamic scenes by modeling complex motions while maintaining efficiency. The method uses a HexPlane-based field to connect adjacent Gaussians, providing an accurate representation of position and shape deformations. By maintaining only a single set of canonical 3D Gaussians and predicting how they deform, it models their motion across different timestamps.[10]
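
A toy sketch of this idea, with a small MLP standing in for the HexPlane-based deformation network of the actual method (the architecture below is an assumption, kept only to show the overall structure): a single set of canonical Gaussian centers is deformed as a function of the query timestamp.

    import torch

    class DeformationField(torch.nn.Module):
        # Toy stand-in for the HexPlane-based deformation network: maps a
        # canonical Gaussian center plus a timestamp to a deformed center.
        def __init__(self, hidden=64):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(4, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, 3),
            )

        def forward(self, means, t):
            # means: (N, 3) canonical centers; t: scalar timestamp.
            t_col = torch.full((means.shape[0], 1), float(t))
            return means + self.net(torch.cat([means, t_col], dim=1))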

It is sometimes referred to as "4D Gaussian splatting"; however, this naming convention implies the use of true 4D Gaussian primitives (parameterized by a four-dimensional mean vector and a 4×4 covariance matrix). Most work in this area still employs 3D Gaussian primitives, applying temporal constraints as an extra parameter of optimization.[citation needed]
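
To make the distinction concrete: a true 4D primitive has a four-component mean and a 4×4 covariance, and rendering at a time t amounts to conditioning on the temporal coordinate, which by standard multivariate Gaussian conditioning (shown here for illustration, not drawn from a specific paper) yields a time-varying 3D Gaussian:

    \mu_{xyz \mid t} = \mu_{xyz} + \Sigma_{xyz,t}\,\Sigma_{tt}^{-1}\,(t - \mu_t),
    \qquad
    \Sigma_{xyz \mid t} = \Sigma_{xyz,xyz} - \Sigma_{xyz,t}\,\Sigma_{tt}^{-1}\,\Sigma_{t,xyz}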

The technique achieves real-time rendering of dynamic scenes at high resolutions while maintaining quality. It shows potential for future developments in film and other media, although current limitations remain regarding the length of motion captured.[10]

Applications

3D Gaussian splatting has been adapted and extended across various computer vision and graphics applications, from dynamic scene rendering to autonomous driving simulations and 4D content creation, including:

  • Text-to-3D content generation. [11]
  • Simulation environments for end-to-end autonomous driving. [12]
  • Surface-aligned splatting for 3D mesh reconstruction and high-quality mesh rendering. [13]
  • Dense RGB-D simultaneous localization and mapping (SLAM). [14]
  • Text-to-4D synthesis of dynamic scenes. [15]

References

  1. Westover, Lee Alan (July 1991). "SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm" (PDF). Retrieved October 18, 2023.
  2. Huang, Jian (Spring 2002). "Splatting" (PPT). Retrieved 5 August 2011.
  3. Westover, Lee. "SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm" (PDF).
  4. Kerbl, Bernhard; Kopanas, Georgios; Leimkuehler, Thomas; Drettakis, George (2023-07-26). "3D Gaussian Splatting for Real-Time Radiance Field Rendering". ACM Transactions on Graphics. 42 (4): 139:1–139:14. arXiv:2308.04079. doi:10.1145/3592433. ISSN 0730-0301.
  5. Wu, Guanjun; Yi, Taoran; Fang, Jiemin; Xie, Lingxi; Zhang, Xiaopeng; Wei, Wei; Liu, Wenyu; Tian, Qi; Wang, Xinggang (June 2024). "4D Gaussian Splatting for Real-Time Dynamic Scene Rendering". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 20310–20320. arXiv:2310.08528. doi:10.1109/CVPR52733.2024.01920.
  6. Fridovich-Keil, Sara; Yu, Alex; Tancik, Matthew; Chen, Qinhong; Recht, Benjamin; Kanazawa, Angjoo (June 2022). "Plenoxels: Radiance Fields without Neural Networks". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5491–5500. arXiv:2112.05131. doi:10.1109/cvpr52688.2022.00542. ISBN 978-1-6654-6946-3.
  7. Mildenhall, Ben; Srinivasan, Pratul P.; Tancik, Matthew; Barron, Jonathan T.; Ramamoorthi, Ravi; Ng, Ren (2020). "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". Lecture Notes in Computer Science. Cham: Springer International Publishing. pp. 405–421. doi:10.1007/978-3-030-58452-8_24. ISBN 978-3-030-58451-1. Retrieved 2024-09-25.
  8. Barron, Jonathan T.; Mildenhall, Ben; Verbin, Dor; Srinivasan, Pratul P.; Hedman, Peter (June 2022). "Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5460–5469. arXiv:2111.12077. doi:10.1109/cvpr52688.2022.00539. ISBN 978-1-6654-6946-3.
  9. Müller, Thomas; Evans, Alex; Schied, Christoph; Keller, Alexander (July 2022). "Instant neural graphics primitives with a multiresolution hash encoding". ACM Transactions on Graphics. 41 (4): 1–15. arXiv:2201.05989. doi:10.1145/3528223.3530127. ISSN 0730-0301.
  10. Franzen, Carl (16 October 2023). "Actors' worst fears come true? New 3D Temporal Gaussian Splatting method captures human motion". venturebeat.com. VentureBeat. Retrieved October 18, 2023.
  11. Chen, Zilong; Wang, Feng; Wang, Yikai; Liu, Huaping (2024-06-16). "Text-to-3D using Gaussian Splatting". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 21401–21412. arXiv:2309.16585. doi:10.1109/cvpr52733.2024.02022. ISBN 979-8-3503-5300-6.
  12. Chen, Li; Wu, Penghao; Chitta, Kashyap; Jaeger, Bernhard; Geiger, Andreas; Li, Hongyang (2024). "End-to-end Autonomous Driving: Challenges and Frontiers". IEEE Transactions on Pattern Analysis and Machine Intelligence. PP (12): 10164–10183. arXiv:2306.16927. Bibcode:2024ITPAM..4610164C. doi:10.1109/tpami.2024.3435937. ISSN 0162-8828. PMID 39078757.
  13. Guédon, Antoine; Lepetit, Vincent (2024-06-16). "SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5354–5363. arXiv:2311.12775. doi:10.1109/cvpr52733.2024.00512. ISBN 979-8-3503-5300-6.
  14. Keetha, Nikhil; Karhade, Jay; Jatavallabhula, Krishna Murthy; Yang, Gengshan; Scherer, Sebastian; Ramanan, Deva; Luiten, Jonathon (2024-06-16). "SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 21357–21366. doi:10.1109/cvpr52733.2024.02018. ISBN 979-8-3503-5300-6.
  15. Ling, Huan; Kim, Seung Wook; Torralba, Antonio; Fidler, Sanja; Kreis, Karsten (2024-06-16). "Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 8576–8588. arXiv:2312.13763. doi:10.1109/cvpr52733.2024.00819. ISBN 979-8-3503-5300-6.