Gaussian Splatting, a method for reconstructing radiance fields, can produce hyper-realistic 3D scenes with relative ease. However, these highly realistic scenes come with a limitation: they require specialized renderers for viewing, making them incompatible with many common 3D applications. This compatibility gap presents a significant barrier for those who want to incorporate Gaussian Splatting into more versatile workflows, particularly when using standard 3D visualization tools that support formats like point clouds.
In response to this issue, a new open-source GitHub repository, 3DGS-to-PC, provides scripts that convert 3D Gaussian Splatting scenes into dense point clouds. This conversion makes the reconstructed 3D scenes more accessible, allowing them to be handled by the broad range of software compatible with point clouds, which are widely used in fields from virtual reality to industrial engineering.
Most existing tools that attempt to convert Gaussian Splatting representations into point clouds do so by simplifying the Gaussians to their central points, resulting in sparse point clouds. The newly developed repository addresses this limitation by providing an approach that results in high-quality, dense point clouds, closely imitating the detail and quality of the original Gaussian Splatting representations.
The process leverages a multivariate normal distribution to sample points from each Gaussian, with the number of sampled points determined by the Gaussian's size. Larger Gaussians thus generate more points, creating a more accurate representation of the scene, including finer details and more realistic depth. To avoid visual inaccuracies, the method ensures that sampled points do not exceed a distance of two standard deviations from the Gaussian center, reducing outlier interference and improving visual consistency.
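The sampling strategy described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function name, the rule for scaling the sample count by Gaussian size, and the constants are all assumptions for demonstration.

```python
import numpy as np

def sample_gaussian_points(mean, cov, base_count=100, max_sigma=2.0):
    """Sample points from one 3D Gaussian, with the count scaled by
    the Gaussian's size and samples clipped at max_sigma standard
    deviations (Mahalanobis distance) from the centre."""
    # Scale the number of samples by the Gaussian's volume, estimated
    # via the determinant of its covariance. The exact scaling rule
    # here is an assumption, not the repository's formula.
    n = max(1, int(base_count * np.cbrt(np.linalg.det(cov)) * 100))
    pts = np.random.multivariate_normal(mean, cov, size=n)
    # Reject samples farther than max_sigma standard deviations from
    # the centre, which suppresses stray outlier points.
    inv_cov = np.linalg.inv(cov)
    d = pts - mean
    mahal_sq = np.einsum('ij,jk,ik->i', d, inv_cov, d)
    return pts[mahal_sq <= max_sigma ** 2]

# Example: a Gaussian elongated along the z-axis produces an
# ellipsoidal cluster of points truncated at two standard deviations.
mean = np.array([0.0, 0.0, 0.0])
cov = np.diag([0.01, 0.01, 0.04])
points = sample_gaussian_points(mean, cov)
```

Because larger covariances yield larger determinants, bigger Gaussians contribute more points, matching the behaviour the developers describe.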
One of the key challenges faced during development was ensuring accurate color representation in the converted point clouds. Gaussians that are occluded by others often carry incorrect color data that would lead to inconsistencies if used directly in the point cloud. To tackle this, developer Lewis Stuart implemented a custom PyTorch-based renderer, adapted from the Torch Splatting project, to track each Gaussian's contribution to the final pixel color. This renderer assigns a more representative color to each Gaussian based on its visual contribution from various perspectives, producing a point cloud that retains the color integrity of the original scene.
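The contribution-tracking idea can be illustrated with a simplified aggregation step: each Gaussian's representative color is the average of the pixel colors it influenced, weighted by its blending contribution to those pixels. The data layout below (a flat list of contribution records) is a stand-in assumption; the actual renderer tracks these weights during alpha compositing.

```python
import torch

def accumulate_gaussian_colors(num_gaussians, contributions):
    """Aggregate a representative RGB colour for each Gaussian from
    its per-pixel blending weights across rendered views.

    contributions: iterable of (gaussian_idx, weight, pixel_rgb)
    records - a simplified stand-in for the bookkeeping a splatting
    renderer performs while compositing pixels.
    """
    color_sum = torch.zeros(num_gaussians, 3)
    weight_sum = torch.zeros(num_gaussians)
    for idx, weight, rgb in contributions:
        # Weight each observed pixel colour by how strongly this
        # Gaussian contributed to that pixel.
        color_sum[idx] += weight * rgb
        weight_sum[idx] += weight
    # Guard against division by zero for Gaussians that were fully
    # occluded in every rendered view.
    weight_sum = weight_sum.clamp(min=1e-8)
    return color_sum / weight_sum.unsqueeze(1)
```

A Gaussian that contributed equally to a red pixel and a blue pixel would be assigned the blended purple, while a fully occluded Gaussian simply receives near-zero weight rather than a spurious color.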
While this solution effectively solved the color mismatch problem, it comes at a cost: rendering in PyTorch is notably slower than CUDA implementations. The developer notes that a future enhancement might involve porting this step to CUDA to significantly speed up point cloud generation.
The current implementation requires predefined camera transforms to determine which perspectives to render, which could be streamlined by developing a method that automatically selects optimal camera positions based on the scene itself. Additionally, while the use of Torch allows flexibility in modifying the renderer without changing the underlying Gaussian Splatting codebase, it sacrifices performance, resulting in slower rendering times.
With ongoing development, including community contributions, these scripts could significantly enhance the usability of Gaussian Splatting data, opening doors for broader applications across industries that rely on 3D data visualization.
For those interested in trying out the conversion themselves, the GitHub repository is available for cloning under an Apache 2.0 license, which permits commercial use. The script takes as input a Gaussian Splat file (in either .ply or .splat format) and offers various arguments to customize the output, such as setting the number of points or adjusting color quality. While the process is computationally intensive, the detailed output and customization options provide significant value for those seeking to use Gaussian Splatting data in more traditional point cloud workflows.
To run properly, the scripts need roughly 5 GB of free VRAM on your GPU. Depending on the parameters set for the point cloud, processing can take between 5 and 20 minutes, with more complex scenes taking longer to reconstruct.
The repository credits the original works of both 3D Gaussian Splatting and Torch Splatting, showcasing how foundational technologies can be iteratively built upon to solve emerging challenges in the 3D visualization space.