

Michael Rubloff
Jun 17, 2025
To create a radiance field, one needs multi-view images of the target object or scene. Generating these from synthetic data is quite achievable, and today another Blender plugin has been released to do exactly that. The newest addition to this growing toolkit is SphereShot, a Blender plugin purpose-built for automating spherical camera placement, lighting setup, and rendering.
SphereShot employs Fibonacci spiral placement to evenly distribute cameras across a sphere around the target object. This uniformity helps avoid gaps in coverage that can lead to reconstruction artifacts. The plugin offers flexibility in how many cameras are used, ranging from as few as a single viewpoint to several hundred, depending on the needs of the downstream reconstruction method.
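For intuition, here is a minimal sketch of how golden-angle ("Fibonacci") spiral placement can be used to distribute cameras in Blender's Python API. It is illustrative only, not SphereShot's actual source; the target name "CaptureTarget", the camera count, and the radius are placeholders.

```python
# Illustrative sketch (not SphereShot's code): spread cameras evenly over a sphere
# with the golden-angle spiral and aim each one at a target object.
import math
import bpy

def fibonacci_sphere(n, radius):
    """Return n points spread roughly uniformly over a sphere of the given radius."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees
    pts = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n          # latitude steps from +1 down to -1
        r = math.sqrt(max(0.0, 1.0 - z * z))   # ring radius at that latitude
        theta = golden_angle * i               # azimuth advances by the golden angle
        pts.append((radius * r * math.cos(theta), radius * r * math.sin(theta), radius * z))
    return pts

target = bpy.data.objects["CaptureTarget"]      # assumed object name
for i, loc in enumerate(fibonacci_sphere(100, radius=5.0)):
    cam_data = bpy.data.cameras.new(f"SphereCam_{i:03d}")
    cam_obj = bpy.data.objects.new(f"SphereCam_{i:03d}", cam_data)
    cam_obj.location = loc
    bpy.context.scene.collection.objects.link(cam_obj)
    track = cam_obj.constraints.new(type='TRACK_TO')  # keep the camera pointed at the target
    track.target = target
    track.track_axis = 'TRACK_NEGATIVE_Z'
    track.up_axis = 'UP_Y'
```

Using a TRACK_TO constraint is a simple way to keep every viewpoint aimed at the object without computing camera rotations by hand.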
But proper camera placement is only part of the challenge. Lighting plays a critical role in ensuring that the captured images are consistent and free from harsh shadows or uneven highlights. SphereShot comes with a professional lighting system that automatically deploys an omnidirectional multi-light configuration around the scene. Using a similar Fibonacci distribution, the plugin evenly positions up to 16 lights (or more, if desired), creating a soft, shadow-free environment that maximizes surface detail and minimizes noise. Light intensity can be fine-tuned anywhere between 0.1W and 10,000W, allowing users to adapt the setup to their object’s reflectivity and size.
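The same spiral idea can drive the light rig. The snippet below is again an assumed sketch rather than the plugin's own code; the collection name, light type, size, and wattage are placeholders, with the 16-light count mirroring the default described above.

```python
# Illustrative sketch only: an omnidirectional rig of area lights placed on a
# golden-angle spiral around the scene origin.
import math
import bpy

def build_light_rig(n_lights=16, radius=8.0, energy=1000.0):
    """Create n_lights area lights spread evenly around the origin."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    rig = bpy.data.collections.new("SphereShot_Lights")   # keep the rig in its own collection
    bpy.context.scene.collection.children.link(rig)
    for i in range(n_lights):
        z = 1.0 - (2.0 * i + 1.0) / n_lights               # latitude from +1 to -1
        r = math.sqrt(max(0.0, 1.0 - z * z))
        theta = golden_angle * i
        light_data = bpy.data.lights.new(f"RigLight_{i:02d}", type='AREA')
        light_data.energy = energy          # Blender specifies light power in watts
        light_data.size = 2.0               # larger area lights give softer shadows
        light_obj = bpy.data.objects.new(f"RigLight_{i:02d}", light_data)
        light_obj.location = (radius * r * math.cos(theta),
                              radius * r * math.sin(theta),
                              radius * z)
        rig.objects.link(light_obj)
    return rig

build_light_rig(n_lights=16, radius=8.0, energy=1000.0)
```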
With just a few clicks, users can generate COLMAP-compatible outputs that include correctly transformed camera intrinsics (using the SIMPLE_PINHOLE model), accurate pose information encoded as quaternion rotations, and colored point clouds.
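To make that conversion concrete, the following sketch writes COLMAP-style cameras.txt and images.txt entries from the cameras in a Blender scene. It illustrates the file format rather than reproducing SphereShot's exporter: the SIMPLE_PINHOLE parameters are a single focal length in pixels plus the principal point, the axis flip maps Blender's -Z-forward/+Y-up camera frame to COLMAP's +Z-forward/+Y-down convention, and the output paths and image filenames are placeholders.

```python
# Sketch of emitting COLMAP text-format camera and pose entries from Blender cameras.
import bpy
from mathutils import Matrix

scene = bpy.context.scene
width = scene.render.resolution_x
height = scene.render.resolution_y
flip = Matrix(((1, 0, 0), (0, -1, 0), (0, 0, -1)))   # Blender camera frame -> COLMAP camera frame

cam_lines, img_lines = [], []
cams = [o for o in scene.objects if o.type == 'CAMERA']
for i, cam in enumerate(cams, start=1):
    # SIMPLE_PINHOLE: focal length in pixels plus the principal point.
    f_px = cam.data.lens / cam.data.sensor_width * width
    cam_lines.append(f"{i} SIMPLE_PINHOLE {width} {height} {f_px} {width / 2} {height / 2}")

    loc, rot, _ = cam.matrix_world.decompose()
    R_w2c = flip @ rot.to_matrix().transposed()       # world-to-camera rotation, COLMAP convention
    t = -(R_w2c @ loc)                                # translation so that X_cam = R * X_world + t
    q = R_w2c.to_quaternion()
    img_lines.append(f"{i} {q.w} {q.x} {q.y} {q.z} {t.x} {t.y} {t.z} {i} render_{i:03d}.png")
    img_lines.append("")                              # COLMAP expects a (possibly empty) 2D-point line

with open("/tmp/cameras.txt", "w") as fh:             # placeholder output path
    fh.write("\n".join(cam_lines))
with open("/tmp/images.txt", "w") as fh:
    fh.write("\n".join(img_lines))
```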
SphereShot even handles point cloud generation directly inside Blender, drawing from both material color and vertex color data, with user control over how many points are sampled. The plugin also includes robust object management features that automatically organize the cameras, lights, and boundary elements into Blender collections. Presets can be saved and loaded as simple JSON files, and real-time statistics provide detailed insight into object counts, render times, and success rates. During longer render runs, SphereShot shows visual progress updates directly in the viewport.
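As a rough idea of how colored point sampling inside Blender can work (an assumed sketch, not SphereShot's implementation), the snippet below samples points from the active mesh's triangles and interpolates a per-vertex color attribute; the attribute name "Col" and the sample count are placeholders.

```python
# Hypothetical sketch: sample a colored point cloud from the active mesh using a
# point-domain color attribute. Triangles are chosen uniformly (not area-weighted),
# which is a simplification.
import random
import bpy

def sample_point_cloud(obj, n_points=10000, color_name="Col"):
    """Return (position, color) pairs sampled from the mesh's triangulated surface."""
    mesh = obj.data
    mesh.calc_loop_triangles()
    colors = mesh.color_attributes.get(color_name)    # assumes a per-vertex color layer exists
    samples = []
    for _ in range(n_points):
        tri = random.choice(mesh.loop_triangles)
        # Uniform barycentric coordinates within the chosen triangle.
        u, v = random.random(), random.random()
        if u + v > 1.0:
            u, v = 1.0 - u, 1.0 - v
        w = 1.0 - u - v
        verts = [mesh.vertices[idx] for idx in tri.vertices]
        pos = obj.matrix_world @ (u * verts[0].co + v * verts[1].co + w * verts[2].co)
        if colors is not None and colors.domain == 'POINT':
            c = [u * colors.data[tri.vertices[0]].color[k]
                 + v * colors.data[tri.vertices[1]].color[k]
                 + w * colors.data[tri.vertices[2]].color[k] for k in range(3)]
        else:
            c = [0.5, 0.5, 0.5]                        # fallback when no vertex colors are present
        samples.append((tuple(pos), tuple(c)))
    return samples

points = sample_point_cloud(bpy.context.active_object, n_points=5000)
```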
Once rendering is complete, SphereShot outputs a clean folder structure that is ready for use in downstream tools. Rendered images, COLMAP files, point clouds, camera presets, and CSV files are all neatly organized, allowing users to immediately begin reconstruction tasks or further refine their datasets. For those working with PostShot or similar Gaussian Splatting pipelines, importing the output is straightforward, accelerating the workflow from object capture to model training.
Released under the MIT License, SphereShot is available as an open source download directly from GitHub.