Parallel Domain Interview

Michael Rubloff

Aug 27, 2024

Recently, I spoke with Parallel Domain, a company founded in 2017 that has been working to accelerate the development of machine perception systems by generating realistic synthetic sensor data for testing and training.

Most recently, Parallel Domain expanded its capabilities with the introduction of PD Replica, a tool that enables simulations across various industries by converting real-world scans into highly detailed, simulation-ready 3D environments. PD Replica bridges the gap between simulation and reality, offering a more accurate and comprehensive testing ground, particularly in sectors such as drone autonomy. Please enjoy the interview below.

Can you give a summary of the history of Parallel Domain and what the company does? 

Parallel Domain was founded in 2017 with the mission to accelerate the development of machine perception. We specialize in providing simulation software that generates realistic synthetic sensor data, which developers use to test and train machine perception systems. Our primary product, Data Lab, generates synthetic camera, LiDAR, and radar outputs for critical, risky, and rare scenarios in a controlled simulation environment. This approach allows developers to safely and effectively test their systems before deploying them in the real world.

Most recently, you announced Parallel Domain Replica. Can you speak about how it extends the business’s capabilities? 

PD Replica significantly extends our business capabilities. It enables simulations across new industry verticals, allows customers to create custom private scene reconstructions, and brings software-in-the-loop testing closer to real-world validation. Historically focused on the automotive sector, we can now simulate environments within any 3D scan, such as warehouses, backyards, agricultural fields, hotels, and hospitals. By converting real-world scans into simulation-ready 3D environments, PD Replica bridges the gap between simulated and real-world testing, providing a more accurate and comprehensive testing ground for various industries.

How does PD Replica leverage AI and 3D reconstruction techniques, specifically radiance fields to create high-fidelity digital twins? 

PD Replica enables a user to take video or image scans of an environment and turn them into a full 3D, simulation-ready reconstruction. Our technology takes inspiration from the latest 3D reconstruction advancements (NeRF, Gaussian Splatting, and more) and combines them with state-of-the-art rendering advancements to produce highly detailed simulations. Additionally, our systems create extra channels of scene geometry and metadata to provide things like physics simulation and semantic scene understanding.
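Conceptually, a simulation-ready reconstruction pairs the radiance-field appearance model with the extra channels described above. The sketch below is a purely illustrative data structure for that idea; all names and fields are hypothetical and are not Parallel Domain's actual API:

```python
from dataclasses import dataclass

@dataclass
class ReplicaScene:
    """Hypothetical bundle for a simulation-ready reconstruction (illustrative only)."""
    splat_means: list        # per-primitive 3D centers (appearance model)
    semantic_labels: list    # per-primitive class id (e.g. 0 = ground, 1 = tree)
    ground_mesh: list        # triangles a physics engine could use for collisions

    def primitives_with_label(self, label):
        """Select the centers of primitives belonging to one semantic class."""
        return [p for p, l in zip(self.splat_means, self.semantic_labels) if l == label]

# Toy example: three primitives, two labeled as ground (class 0)
scene = ReplicaScene(
    splat_means=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 1.0)],
    semantic_labels=[0, 0, 1],
    ground_mesh=[],
)
print(len(scene.primitives_with_label(0)))  # 2
```

The point of the extra channels is queries like `primitives_with_label`: a downstream simulator can ask "where is the ground?" or "which primitives are vegetation?" rather than treating the reconstruction as opaque appearance data.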

Can you explain the core technology behind PD Replica and how it differs from your previous offerings?

Our previous offerings were based on classical triangular mesh rendering technology. Our team had developed an extensive system of procedural generation algorithms that would generate meshes to represent roads, terrain, buildings, etc. There were two shortcomings to this approach: 1) the amount of variety in the real world is impossible to capture with manually written rulesets, and 2) even if you write incredibly sophisticated procedural algorithms, you're always trying to distill the chaos of the real world into rules, rather than just capturing that chaos as an input.

PD Replica overcomes these limitations by leveraging image-only data, typically 2,000-4,000 images per Replica, captured from various angles. The core differentiation lies in our proprietary processing stack, which reconstructs highly detailed environments with semantic segmentation, ground meshes, depth maps, and environment maps, making them suitable for simulation. These environments integrate seamlessly with our existing Parallel Domain Data Lab product, allowing developers to perform consistent simulations in both procedurally generated and PD Replica environments.

What is an industry or use case that you’re very excited about being able to leverage PD Replica?

We are particularly excited about the potential applications of PD Replica in the field of drone autonomy. As drones become increasingly integral to industries ranging from delivery services to emergency response, the need for robust and reliable autonomous systems is paramount.

PD Replica allows us to create highly detailed and accurate synthetic environments that can mimic the complexities and nuances of the real world. For instance, in the context of drone delivery, we can simulate a variety of backyard conditions, from dense urban settings to rural landscapes, each with its own set of obstacles like trees, power lines, and agents like people or pets walking into the landing zone. This enables machine learning teams to test and train drone perception systems in a safe, controlled, and highly varied environment, ensuring they can handle a wide range of challenges before deployment.

The flexibility of PD Replica allows for continuous iteration and improvement. As drone companies gather more data and encounter new challenges, they can quickly adapt PD Replica environments to reflect these insights, ensuring that autonomous systems are always at the cutting edge of performance and reliability.


Can you provide examples of how PD Replica has been used in real-world testing scenarios? What are some examples of increased productivity that businesses have been able to achieve?

PD Replica has been used in a variety of testing and training scenarios, such as object detection and depth estimation. One notable example is in the development of automated parking space detection. Many of our automotive customers develop automated parking solutions, and space detection is a difficult problem to solve. They need perception systems that can automatically detect parking spaces across a variety of conditions, even when other people or cars may be in the way. We were able to generate simulated parking scenarios across a large variety of US and European parking lot Replicas, ultimately accelerating the customer's testing and training efforts. Compared to traditional real-world data methods, our customers often shave entire months off of their development timelines and end up with more precise and reliable systems.

Can you discuss the customization options available within PD Replica for different testing scenarios?

When a user creates a simulation inside a Replica, they have the ability to control nearly all elements of the dynamic agents in that scene. They can add people, vehicles, props, and more. Our customers create scenarios for jaywalker detection, scene understanding, object detection for drone delivery, and more.
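A scenario like the ones described above can be thought of as declarative configuration: pick a reconstructed location, then place dynamic agents in it. The snippet below is a hypothetical illustration of that idea, not Data Lab's real API; the scene id, agent types, and behavior names are all invented for the example:

```python
import json

def build_scenario(location: str, agents: list) -> str:
    """Assemble a hypothetical scenario spec: a Replica location plus dynamic agents."""
    for a in agents:
        # Every agent needs at least a type and a behavior to be simulated
        assert {"type", "behavior"} <= a.keys(), "each agent needs a type and behavior"
    return json.dumps({"location": location, "agents": agents}, indent=2)

# Example: a jaywalker-detection scenario inside a scanned parking lot
spec = build_scenario(
    location="replica/us_parking_lot_03",   # hypothetical scene id
    agents=[
        {"type": "pedestrian", "behavior": "jaywalk"},
        {"type": "vehicle", "behavior": "reverse_out_of_space"},
    ],
)
print(json.loads(spec)["agents"][0]["type"])  # pedestrian
```

Expressing scenarios as data rather than code is what makes it cheap to sweep across many variations of the same situation, e.g. the same jaywalker placed in dozens of different Replica parking lots.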

Do you offer different pricing tiers or packages based on the scale or needs of different customers?

All current Parallel Domain customers have access to our base package of PD Replicas covering a wide range of common environments for automotive and drone delivery scenarios. We also offer a higher tier that allows customers to use their own images or videos to create custom, private PD Replicas, or to request custom PD Replica creation for specific scenarios. While we currently work with large, well-established automotive and drone delivery companies, we welcome customers of all sizes looking to accelerate the development of their machine perception systems.

What kind of support, onboarding, and training do you offer to new customers adopting PD Replica?

PD Replicas integrate seamlessly into our existing product offering as an additional location option alongside procedurally generated locations. However, we recognize that many developers may not be familiar with these types of locations. To assist with this, we provide comprehensive documentation and a dedicated support team to help customers utilize this new feature. Our domain expertise guides machine learning teams in adopting best practices for synthetic data generation and running scenarios within PD Replicas. We are known for our white-glove treatment, offering customizable levels of support. Our robust onboarding process includes dedicated field engineers who work alongside developers to ensure success within our platform.

What are the next steps for Parallel Domain in terms of product development and innovation?

PD Replicas mark a significant advancement for machine learning teams by providing environments that closely mimic the real world. Our next steps involve enhancing these capabilities even further. We are developing features to control environmental lighting conditions, enabling daytime, nighttime, and cloudy scenarios from the same data. Additionally, we continually improve our processes for semantic segmentation, object removal, and scene cleanup to ensure the highest-quality scene reconstructions. Lastly, we are also focused on improving the processing of customer-captured imagery, addressing challenges like rolling shutter effects and color space compression, to generate high-quality PD Replica reconstructions from a wide range of image sources.

How do you ensure the security and privacy of the data used in creating digital twins with PD Replica?

We prioritize security and privacy by working with established automotive and drone delivery companies that share our commitment to these principles. All digital twins are created with explicit permissions, and we ensure that personally identifiable information, such as license plates, is removed from the final PD Replica reconstructions. When customers build private PD Replica libraries, their data is safeguarded with enterprise-grade encryption and stringent access controls. This approach guarantees that sensitive information is protected throughout the entire process.
