Neural reconstruction for editable scenarios in dSPACE Automotive Simulation Models (ASM): Transform recorded drives into digital twins with NVIDIA Omniverse NuRec libraries and extend them into scalable simulation scenarios using dSPACE simulation solutions.
From Recorded Sensor Data to Scalable Simulation Workflow
Validating advanced driver assistance systems (ADAS) and automated driving (AD) functions requires simulation workflows that scale beyond physical test drives, providing the coverage of test kilometers needed to bring safe driving functions to the road. Physics-based simulation remains a core pillar of this process, enabling deterministic and reproducible testing.
At the same time, the world in which autonomous vehicles operate is extremely diverse. As vehicle fleets accumulate more miles – both in development and in production – neural reconstruction technologies are creating a new, inherently scalable type of simulation environment, allowing development teams to generate digital twins directly from recorded sensor log data. NVIDIA Omniverse NuRec libraries for 3D Gaussian splatting provide a reconstruction pipeline that turns recorded sensor data into 3D simulation environments with dynamic actors, which can be replayed and modified. Combined with a runtime and other key components such as physics and behavioral models, they can serve as the foundation for a complete, scalable closed-loop simulation solution.
To support this, dSPACE is integrating NuRec into the dSPACE Automotive Simulation Models (ASM) tool suite, connecting neural reconstruction with established vehicle dynamics and traffic simulation models.
Scalability and realism are where neural reconstruction shines (figure 2). With neural reconstruction based on Gaussian splatting, you can create an interactive digital twin of every scenario in your data set and modify it to craft the interesting “what if” (edge and corner) cases that challenge your AV stack to take the right action. Coupled with powerful vehicle dynamics and closed-loop real-time simulation, this technology enables testing of exactly these “what-if” scenarios.
What is 3D Gaussian splatting?
Neural radiance fields (NeRFs) were among the first approaches to apply deep learning to scene reconstruction. While NeRFs could reconstruct scenes with remarkable visual quality, their inference speed was far from real-time-capable.
That changed with the (re-)introduction of 3D Gaussian splatting (3DGS), which combines the advantages of differentiable optimization techniques (as in NeRFs) with the efficiency of rasterization-based renderers. In 3DGS, a scene is modeled as an unstructured set of millions of anisotropic 3D Gaussian ellipsoids. This approach not only enables training times that are often several orders of magnitude faster than NeRFs, but is also real-time-capable during rendering, at 100+ FPS depending on the resolution.
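The per-pixel blending behind this rasterization can be sketched as classic front-to-back alpha compositing. The following is a simplified illustration in NumPy, assuming the Gaussians covering a pixel have already been projected and sorted by depth; it is not the actual CUDA rasterizer:

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Blend depth-sorted splat contributions for one pixel.

    colors: (N, 3) RGB contributions of the N Gaussians covering the pixel,
            sorted front (index 0) to back.
    alphas: (N,) effective opacity of each Gaussian at this pixel.
    """
    pixel = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # stop once the pixel is effectively opaque
            break
    return pixel
```

A half-transparent red splat in front of an opaque blue one, for example, yields an even red/blue mix, since the back splat only contributes through the remaining transmittance.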
Let us take a look at the “atom” of 3DGS: the 3D Gaussian. In contrast to points in a point cloud, 3D Gaussians exhibit spatial spread and orientation, allowing them to approximate continuous surfaces and complex textures. Figure 1 visualizes how these Gaussians can be used to reconstruct, for example, a car. A 3D Gaussian is characterized by four properties:
- Center/Position µ: center of the ellipsoid in space
- Covariance matrix: determines shape, scale, and orientation of the ellipsoid. The covariance matrix is typically factored into a rotation matrix and a (diagonal) scaling matrix. This approach offers an intuitive geometric interpretation. The optimization algorithm can scale the Gaussians – stretching, compressing, or rotating them arbitrarily in 3D space – to fit the geometry of the scene. For instance, the Gaussian can be flat to represent a wall or extremely elongated to represent a cable.
- Opacity: determines the transparency of the ellipsoid
- Color: represented by spherical harmonic (SH) coefficients, which allow modeling of view-dependent effects. A major aspect of photorealism is that the color of a surface point changes depending on the viewing angle. If the Gaussians stored colors only as plain RGB values, the scene would appear “flat”. To overcome this limitation, 3DGS uses spherical harmonics, storing up to 16 coefficients per color channel (for degree-3 harmonics). During rendering, the viewing direction relative to the Gaussian is evaluated and the spherical harmonic terms are summed to obtain the resulting RGB color. This enables complex lighting effects.
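Taken together, these properties can be sketched as a small data structure. The covariance factorization Σ = R S Sᵀ Rᵀ described above guarantees a valid (symmetric, positive semi-definite) covariance no matter how the optimizer scales or rotates a Gaussian. The helpers below are a minimal illustration: the quaternion convention and the degree-0 (view-independent) SH evaluation are simplifications of the full method:

```python
import numpy as np

SH_C0 = 0.28209479177387814  # normalization constant of the degree-0 SH basis

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance(quat, scale):
    """Sigma = R S S^T R^T: symmetric positive semi-definite by construction."""
    R = quat_to_rotmat(np.asarray(quat, dtype=float))
    S = np.diag(scale)
    return R @ S @ S.T @ R.T

def sh_degree0_color(sh_dc):
    """View-independent (degree-0) part of the SH color; higher degrees
    add the view-dependent terms described above."""
    return np.clip(SH_C0 * np.asarray(sh_dc) + 0.5, 0.0, 1.0)
```

With an identity rotation and scales (2, 1, 1), for instance, the covariance is simply diag(4, 1, 1): an ellipsoid stretched along one axis, exactly the kind of flat or elongated primitive used to fit walls or cables.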
Integrating NVIDIA NuRec into the dSPACE Tool Chain
By connecting the dSPACE toolchain to NuRec (figure 4), we take sensor resimulation a step further. Rather than limiting ourselves to simple data replays, users can now simulate new or adapted driving scenarios in the reconstructed environments.
This is achieved by connecting NuRec to the dSPACE ASM OpenX solution, which allows user-defined scenarios in the OpenSCENARIO standard format to be simulated. This makes it possible to replay scenarios with scalable parameter variation as well as to manually craft edge-case scenarios, reducing the risk that specific situations are not covered by the original data.
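Scalable parameter variation of this kind can be illustrated as a sweep over a scenario's parameter space. The parameter names and value grids below are purely hypothetical examples; in practice, such parameters would be declared via OpenSCENARIO ParameterDeclarations and varied by the simulation tooling, not by a script like this:

```python
from itertools import product

# Hypothetical parameters for a cut-in scenario (illustrative values only).
parameter_space = {
    "CutInDistance_m": [10.0, 20.0, 30.0],
    "TargetSpeed_kmh": [50.0, 80.0],
    "LateralVelocity_ms": [0.5, 1.0],
}

def scenario_variants(space):
    """Enumerate the full cross product of parameter values,
    one dict per scenario variant."""
    names = list(space)
    for values in product(*space.values()):
        yield dict(zip(names, values))

variants = list(scenario_variants(parameter_space))
print(len(variants))  # 3 * 2 * 2 = 12 variants
```

Each resulting dictionary describes one concrete scenario to simulate; a full grid like this grows multiplicatively with each parameter, which is exactly why automated variation matters for coverage.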
The integrated approach allows customers to keep working with their familiar tools on the simulation side while taking full advantage of 3DGUT-based scene representations for visualization and camera sensor simulation – with a specific focus on minimizing the domain gap between real and simulated data. Within certain limits, even the ego-vehicle trajectory and camera parameters can be adjusted. The efficient 3DGUT-based NuRec rendering approach is real-time-capable, enabling use cases such as closed-loop HIL simulation. For SIL or open-loop HIL use cases, image quality can be further improved by using the Fixer to reduce artifacts in rendered camera images.
The workflow also allows user-defined 3D assets to be integrated as movable Gaussian splats. dSPACE AURELION already provides a large 3D asset database that can serve as a basis for this. In addition, Asset Harvester can generate 3D assets from in-the-wild footage, helping fill remaining content gaps. Custom assets enable users to adapt and vary the environment and to visualize different traffic participants.
Figure 4: Camera parameterization and formatted sensor raw data serve as input to the NuRec pipeline to generate a digital twin, which can be extended with additional 3D assets from the dSPACE library. The scenario can be configured and simulated in ASM and is rendered in real time to test your driving function.
What about radar and lidar?
As simulation continues to evolve, the combination of data-driven development with physics-based simulation will become increasingly important. Neural reconstruction provides a scalable way to transform recorded test drives into digital twins, with the ability to further modify driving scenarios while maintaining a high degree of fidelity to the real sensor data.
While NVIDIA Omniverse NuRec already provides support for lidar simulation, the workflow presented in this blog primarily focuses on camera-based use cases. As today’s driving functions also rely heavily on radar and lidar, the integration of these sensors will be essential to further extend the applicability of this simulation approach.
Stay tuned for future blog posts, where we will provide further insights into the evolution of the tool chain and the integration of additional sensor technologies into the dSPACE simulation ecosystem.
Contact dSPACE
If you are interested in integrating NVIDIA Omniverse NuRec libraries with dSPACE simulation solutions into your test and development workflow, contact dSPACE to discuss how this approach can be tailored to your requirements.
About the Authors
Caius Seiger
Product Manager, Automated Driving & Software Solutions, dSPACE
Frederik Viezens
Project Engineer, Automated Driving & Software Solutions, dSPACE
Sven Burdorf
AI Expert, coordinating dSPACE's activities in the project, dSPACE