

Michael Rubloff
Apr 16, 2025
Back in November, we looked at an open-source GitHub repository that introduced a promising solution: scripts for converting 3D Gaussian Splatting scenes into dense point clouds. By converting Gaussian-based 3D reconstructions into a traditional point cloud format, the framework makes reconstructed scenes accessible to the much broader range of software that supports point clouds. Now the repo is receiving a major update, with some awesome new features.
Most existing tools that attempt to convert Gaussian Splatting representations into point clouds do so by simplifying each Gaussian to its central point, resulting in sparse and often inadequate point clouds. This repository addresses that limitation with a more sophisticated approach that generates high-quality, dense point clouds, closely matching the detail and quality of the original Gaussian Splatting representation.
The process leverages a multivariate normal distribution to sample points from each Gaussian, with the number of sampled points determined by the Gaussian's size. Larger Gaussians thus generate more points, creating a more accurate representation of the scene, including finer details and more realistic depth. To avoid visual inaccuracies, the method ensures that sampled points do not exceed a distance of two standard deviations from the Gaussian center, reducing outlier interference and improving visual consistency.
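The sampling strategy described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the repo's actual code: the exact rule for scaling point count with Gaussian size is an assumption here (mean standard deviation along the principal axes), and the 2-standard-deviation cutoff is applied as a Mahalanobis-distance filter on the samples.

```python
import numpy as np

def sample_gaussian(mean, cov, base_points=50, max_sigma=2.0, rng=None):
    """Sample points from one 3D Gaussian, keeping only samples within
    max_sigma standard deviations (Mahalanobis distance) of the center.

    The point budget scales with the Gaussian's size -- here approximated
    as the mean standard deviation along its principal axes (an assumption,
    not necessarily the repo's exact heuristic)."""
    rng = rng or np.random.default_rng()
    scale = np.sqrt(np.linalg.eigvalsh(cov)).mean()
    n = max(1, int(base_points * scale))

    # Draw from the multivariate normal defined by this Gaussian
    pts = rng.multivariate_normal(mean, cov, size=n)

    # Mahalanobis distance of each sample from the center
    inv_cov = np.linalg.inv(cov)
    d = pts - mean
    m = np.sqrt(np.einsum("ij,jk,ik->i", d, inv_cov, d))

    # Reject outliers beyond max_sigma standard deviations
    return pts[m <= max_sigma]
```

Running this over every Gaussian in a scene and concatenating the results yields the dense cloud; larger Gaussians contribute proportionally more points.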
Overcoming Color Mismatch: A CUDA-Powered Rendering Solution
One of the key challenges faced during the initial development was ensuring accurate color representation in the converted point clouds. Gaussians that are occluded by others often contain incorrect color data that would lead to inconsistencies if used directly in the point cloud. The original solution involved a custom PyTorch-based renderer adapted from the Torch Splatting project, which tracked each Gaussian's contribution to final pixel colors to assign more accurate representations.
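The core idea behind tracking contributions is standard front-to-back alpha compositing: each Gaussian's weight at a pixel is its alpha times the transmittance left over after the Gaussians in front of it. The single-pixel sketch below illustrates the principle only; the actual renderer works over full images and projected 2D Gaussians.

```python
import numpy as np

def pixel_contributions(alphas):
    """Front-to-back alpha compositing for one pixel.

    Given the alpha values of the Gaussians covering this pixel, sorted
    front to back, return each Gaussian's contribution weight
    w_i = T_i * alpha_i, where T_i is the transmittance remaining after
    all Gaussians in front of it. Heavily occluded Gaussians end up with
    near-zero weight, which is how unreliable color data gets flagged."""
    T = 1.0  # transmittance: fraction of light not yet absorbed
    weights = []
    for a in alphas:
        weights.append(T * a)
        T *= (1.0 - a)
    return np.array(weights)
```

For example, `pixel_contributions([0.9, 0.5, 0.8])` gives weights `[0.9, 0.05, 0.04]`: the front Gaussian dominates the pixel color, while the two behind it contribute almost nothing.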
Now, the framework’s color renderer has been completely rewritten and optimized in CUDA, delivering a dramatic 200x speed-up compared to the original Python-based implementation. This performance leap reduces conversion times from several minutes to under a minute, even for complex scenes.
For instance, the Mip-NeRF bicycle scene, which contains roughly 6 million Gaussians, was converted into a dense point cloud with 20 million points in about 60 seconds using an RTX 2080 Ti graphics card. This improvement marks a substantial leap forward.
Smarter Filtering with Visibility Thresholds
In addition to performance improvements, the latest update introduces a new visibility threshold parameter. This allows users to intelligently cull Gaussians that contribute little or no visible data to the rendered image. By removing low-contribution Gaussians, the resulting point clouds are not only cleaner but also more representative of the actual visual content.
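Conceptually, the visibility threshold reduces to a simple filter over per-Gaussian accumulated contributions. The sketch below is an assumption about how such a filter might look, not the repo's implementation; in particular, normalizing contributions to [0, 1] before thresholding is a choice made here for illustration.

```python
import numpy as np

def visibility_mask(contributions, threshold=1e-2):
    """Boolean mask of Gaussians to keep.

    'contributions' is each Gaussian's accumulated rendering weight
    across all pixels (hypothetically, the summed w_i from a
    contribution-tracking renderer). Values are normalized to [0, 1]
    and Gaussians below the threshold are culled."""
    c = np.asarray(contributions, dtype=float)
    norm = c / c.max() if c.max() > 0 else c
    return norm >= threshold
```

Applying the mask before sampling means low-visibility Gaussians never emit points, which is where the cleaner output comes from.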
For those interested in trying out the conversion themselves, the GitHub repository is available for cloning and comes with an Apache 2.0 License, meaning it can be used commercially. The script takes as input a Gaussian Splat file (either in .ply or .splat format) and offers various arguments to customize the output, such as setting the number of points or adjusting color quality.