
Gauss Cannon v1.2.0 ships with a point-cloud raytracing overhaul that makes the Blender add-on viable for dense scenes (thousands of objects, heavy vegetation, 100 million or more vertices) that would previously stall or never complete.
Gauss Cannon is a Blender add-on by Warpgate Labs that handles the capture prep side of Gaussian Splatting workflows. It places synthetic cameras around a scene, generates keyframe paths, and exports data that can feed directly into training pipelines like nerfstudio, Postshot, and LichtFeld Studio.
The core problem was that Gauss Cannon rebuilt a BVH (bounding volume hierarchy) for every mesh on every frame. For small scenes this was fine; for the complex architectural or vegetation-heavy scenes a user flagged, it was unusable. v1.2.0 reworks the pipeline around the fact that geometry is static during a capture export: only the camera moves. It now builds a single world-space BVH once for all selected meshes, cutting per-ray cost from O(meshes) to a single ray_cast call.
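The merge step can be sketched in plain numpy: collapse each mesh's local-space vertices into one world-space vertex array, with triangle indices re-offset, so a single BVH can be built over everything. The function name and dict layout below are illustrative, not Gauss Cannon's actual API; inside Blender the merged arrays would typically feed something like mathutils.bvhtree.BVHTree.FromPolygons.

```python
import numpy as np

def merge_world_space(meshes):
    """Merge per-mesh local geometry into one world-space vertex/triangle
    set, the shape a single shared BVH is built from. Each mesh here is a
    dict with 'verts' (N, 3), 'tris' (M, 3), and a 4x4 'matrix_world'
    (hypothetical layout for illustration)."""
    all_verts, all_tris, offset = [], [], 0
    for mesh in meshes:
        verts = np.asarray(mesh["verts"], dtype=np.float64)
        # Apply the object's world transform: rotation/scale, then translation.
        m = np.asarray(mesh["matrix_world"], dtype=np.float64)
        world = verts @ m[:3, :3].T + m[:3, 3]
        all_verts.append(world)
        # Re-index this mesh's triangles into the merged vertex array.
        all_tris.append(np.asarray(mesh["tris"], dtype=np.int64) + offset)
        offset += len(verts)
    return np.vstack(all_verts), np.vstack(all_tris)
```

Because the merged geometry never changes during an export, this work happens once up front, and every subsequent ray queries one structure instead of looping over meshes.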
v1.2.0 also precomputes ray directions once per camera setup rather than recomputing them per frame, adds alpha-skip to bypass transparent pixels before any ray casting begins, and rewrites PLY export to use preallocated numpy buffers with a single bulk tofile() call. A second round of optimizations landed during code review, replacing Python-level vertex iteration with Blender's internal C-based mesh.transform() and foreach_get() for bulk extraction.
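The export rewrite can be illustrated with a minimal binary-PLY writer: preallocate one interleaved structured array and flush it with a single tofile() call instead of looping over vertices in Python. The helper name and the x/y/z plus red/green/blue layout are assumptions for illustration; the add-on's actual field layout may differ.

```python
import numpy as np

def write_ply_points(path, points, colors):
    """Write a binary little-endian PLY point cloud in one bulk pass.
    `points` is (N, 3) float32, `colors` is (N, 3) uint8. Layout is a
    common PLY convention, assumed here for illustration."""
    n = len(points)
    header = (
        "ply\n"
        "format binary_little_endian 1.0\n"
        f"element vertex {n}\n"
        "property float x\nproperty float y\nproperty float z\n"
        "property uchar red\nproperty uchar green\nproperty uchar blue\n"
        "end_header\n"
    )
    # Preallocate one interleaved buffer matching the on-disk record layout.
    dtype = np.dtype([("xyz", "<f4", 3), ("rgb", "u1", 3)])
    buf = np.empty(n, dtype=dtype)
    buf["xyz"] = points
    buf["rgb"] = colors
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        buf.tofile(f)  # single bulk write, no per-vertex Python loop
```

The point is that all per-vertex work happens inside numpy's C layer; the Python side touches the data a constant number of times regardless of vertex count.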
The version bump was framed in the PR as a capability change: "scenes that didn't complete now do."
v1.2.0 is available with an MIT License on GitHub.
