To complete this project, we had to tackle two main problems: simulation and rendering.

Studying

For the simulation, we started from the following two papers:

Both papers build on the material point method (MPM), adding constraints to produce more realistic effects, and they provide useful parameter values for reference. Although running the same configurations did not reproduce their results exactly, the papers gave us a good understanding of how to tune the parameters.

Modeling

At first, for simplicity, we used only models with basic shapes, such as spheres and cuboids. We also built some 3D models with Tinkercad and 3D Builder (native to Windows 10) to enrich the rendered scenes. Later, we found that CloudCompare, an open-source tool for 3D point-cloud and mesh processing, can sample points to fill a 3D mesh, which let us bring more complicated models into our scenes.

Simulating with the material point method

MPM simulation is composed of the following steps:

  1. Rasterize particle data to the grid
  2. Compute particle volumes and densities
  3. Compute grid forces
  4. Update velocities on grid
  5. Grid-based body collisions
  6. Solve the linear system
  7. Update deformation gradient
  8. Update particle velocities
  9. Particle-based body collisions
  10. Update particle positions

In each step, the properties of the grid nodes or the particles are updated. These per-node and per-particle updates are highly parallel, so we use Thrust, a C++ template library for CUDA, to accelerate the simulation on the GPU. For more details, please refer to A material point method for snow simulation and our GitHub repo.

Viewing real-time results with OpenGL

For real-time viewing, we use OpenGL to create a simple scene and provide several camera views from different angles. We bind the vertex buffer to CUDA, so that the simulation writes particle positions directly into GPU memory and normal OpenGL draw calls can render the MPM particles with no extra copying.
We use this scene to check whether the simulation is behaving well. Once we get a satisfying result, we save the particle positions of each frame for the OptiX™ Ray Tracing Engine.

Voxelizing and ray tracing

From the saved particle positions, we load the point cloud, voxelize it, and render the voxels. We use the GVDB-Voxels library to render the sparse volumetric data; integrated with NVIDIA OptiX, it produces high-quality ray-traced results. After every frame has been rendered, the saved images are combined into a video.

Generating video

We use FFmpeg to combine all the images into one video. For example, if every frame is saved in PNG format, the following command generates the video; the -r option specifies how many images are displayed per second (fps).

ffmpeg -r [fps] -pattern_type glob -i '*.png' -c:v libx264 -vf "format=yuv420p" [video name].mp4