To complete this project, we have to solve two main problems: simulation and rendering.
Studying
For the simulation, we start from the following two papers:
- Multi-species simulation of porous sand and water mixtures
- A material point method for snow simulation
Both papers build on the MPM method, adding constraints to achieve better effects, and they provide useful parameters we can reference. Although we obtained results different from theirs even with the same configurations the papers mention, they still gave us a good understanding of how to tune the parameters.
Modeling
At first, for simplicity, we used only models with basic shapes, such as spheres and cuboids. We also built some 3D models with Tinkercad and the 3D Builder app native to Windows 10 to enrich the rendering scenes. Later, we found that CloudCompare, an open-source project for 3D point cloud and mesh processing, can sample points to fill a 3D mesh, so more complicated models appeared in our scenes.
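To illustrate the idea of filling a shape with particles, the sketch below rejection-samples points uniformly inside a sphere, one of the basic shapes we started with. Real mesh filling (what we use CloudCompare for) tests candidate points against an arbitrary mesh instead; the function name and parameters here are our own, not CloudCompare's API.

```python
import numpy as np

def sample_points_in_sphere(center, radius, n, seed=None):
    """Rejection-sample n points uniformly inside a sphere.

    Candidates are drawn in the bounding cube and kept only if they
    fall inside the sphere; mesh filling replaces this inside test
    with a point-in-mesh test.
    """
    rng = np.random.default_rng(seed)
    points = np.empty((0, 3))
    while len(points) < n:
        cand = rng.uniform(-radius, radius, size=(n, 3))
        inside = np.linalg.norm(cand, axis=1) <= radius
        points = np.vstack([points, cand[inside]])
    return points[:n] + np.asarray(center)

pts = sample_points_in_sphere(center=(0.0, 0.0, 0.0), radius=1.0, n=1000, seed=42)
```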
Simulating using the material point method
MPM simulation is composed of the following steps:
- Rasterize particle data to the grid
- Compute particle volumes and densities (first timestep only)
- Compute grid forces
- Update velocities on grid
- Grid-based body collisions
- Solve the linear system
- Update deformation gradient
- Update particle velocities
- Particle-based body collisions
- Update particle positions
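The loop above can be sketched as a toy 2D MPM step in NumPy, with APIC transfers, quadratic B-spline weights, and a simple pressure model driven by the volume ratio J = det(F). This is a minimal explicit-time illustration, not our implementation: it skips the implicit linear solve and the plasticity models from the papers, and the grid resolution, timestep, and stiffness are assumed values.

```python
import numpy as np

n_grid = 32
dx = 1.0 / n_grid
dt = 2e-4
p_vol = (dx * 0.5) ** 2
p_mass = p_vol * 1.0             # density 1
E = 400.0                        # bulk-like stiffness of the toy material
gravity = np.array([0.0, -9.8])

rng = np.random.default_rng(0)
n_particles = 256
x = rng.uniform(0.3, 0.5, size=(n_particles, 2))  # positions
v = np.zeros((n_particles, 2))                    # velocities
C = np.zeros((n_particles, 2, 2))                 # APIC affine velocity
J = np.ones(n_particles)                          # per-particle det(F)

def quadratic_weights(fx):
    # 1D quadratic B-spline weights for the three nearest grid nodes
    return [0.5 * (1.5 - fx) ** 2, 0.75 - (fx - 1.0) ** 2, 0.5 * (fx - 0.5) ** 2]

def step():
    grid_v = np.zeros((n_grid, n_grid, 2))
    grid_m = np.zeros((n_grid, n_grid))
    # Rasterize particle data and forces to the grid
    for p in range(n_particles):
        base = (x[p] / dx - 0.5).astype(int)
        fx = x[p] / dx - base
        w = quadratic_weights(fx)
        stress = -dt * 4 * E * p_vol * (J[p] - 1) / dx ** 2
        affine = stress * np.eye(2) + p_mass * C[p]
        for i in range(3):
            for j in range(3):
                dpos = (np.array([i, j]) - fx) * dx
                weight = w[i][0] * w[j][1]
                grid_v[base[0] + i, base[1] + j] += weight * (p_mass * v[p] + affine @ dpos)
                grid_m[base[0] + i, base[1] + j] += weight * p_mass
    # Update grid velocities; clamp boundary nodes (grid-based collisions)
    nonzero = grid_m > 0
    grid_v[nonzero] = grid_v[nonzero] / grid_m[nonzero][:, None] + dt * gravity
    grid_v[:2, :, 0] = grid_v[-2:, :, 0] = 0.0
    grid_v[:, :2, 1] = grid_v[:, -2:, 1] = 0.0
    # Transfer back to particles, update J, and advect
    for p in range(n_particles):
        base = (x[p] / dx - 0.5).astype(int)
        fx = x[p] / dx - base
        w = quadratic_weights(fx)
        new_v = np.zeros(2)
        new_C = np.zeros((2, 2))
        for i in range(3):
            for j in range(3):
                dpos = np.array([i, j]) - fx
                weight = w[i][0] * w[j][1]
                gv = grid_v[base[0] + i, base[1] + j]
                new_v += weight * gv
                new_C += 4 * weight * np.outer(gv, dpos) / dx
        v[p] = new_v
        C[p] = new_C
        J[p] *= 1 + dt * np.trace(new_C)
        x[p] += dt * new_v

y_before = x[:, 1].mean()
for _ in range(50):
    step()
```

After 50 steps, gravity has started pulling the particle block downward; the real solver replaces the J-based pressure with the elastoplastic constitutive models from the papers and runs on the GPU.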
Viewing real-time result with OpenGL
For real-time result viewing, we use OpenGL to create a simple scene and provide several camera views from different angles. We then bind the vertex buffer to CUDA, so that we can use the normal OpenGL functions to render the MPM particles easily.
We use this scene to check whether the simulation behaves as expected. Once we get a satisfying result, we save the particle positions of every frame for the OptiX™ Ray Tracing Engine.
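A minimal sketch of the per-frame dump, assuming a raw float32 xyz layout; the actual file format is whatever the offline renderer's loader expects, and the names here are hypothetical.

```python
import os
import tempfile

import numpy as np

def save_frame(positions, frame, out_dir):
    """Write one frame of particle positions as raw float32 xyz triples."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"frame_{frame:04d}.bin")
    np.asarray(positions, dtype=np.float32).tofile(path)
    return path

out_dir = tempfile.mkdtemp()
path = save_frame(np.random.rand(1000, 3), frame=0, out_dir=out_dir)
loaded = np.fromfile(path, dtype=np.float32).reshape(-1, 3)  # round-trip check
```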
Voxelizing and ray tracing
From the saved particle positions, we load the point cloud, voxelize it, and render the voxels. The GVDB-Voxels library is used to render the sparse volumetric data; integrated with NVIDIA OptiX, it produces high-quality ray-traced results. After all the point cloud data are rendered, the saved images of each frame can be combined into a video.
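The voxelization step can be sketched as quantizing each point to the integer cell containing it and collapsing duplicates into one occupied voxel. GVDB stores the same occupancy sparsely in its own tree structure; this NumPy version only shows the idea, and the helper name and voxel size are our own assumptions.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map a point cloud to the set of occupied voxel indices.

    Each point falls into the cell floor(p / voxel_size); duplicate
    cells collapse, leaving one index triple per occupied voxel.
    """
    idx = np.floor(np.asarray(points) / voxel_size).astype(np.int64)
    return np.unique(idx, axis=0)

cloud = np.array([[0.05, 0.05, 0.05],   # these two points share a cell...
                  [0.06, 0.04, 0.07],
                  [0.95, 0.20, 0.30]])  # ...this one occupies another
occupied = voxelize(cloud, voxel_size=0.1)  # two occupied voxels
```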
Generating video
We use FFmpeg to combine all the images into one video. For example, we save every frame in PNG format and use the following command to generate the video. We can also specify how many images are displayed per second (fps).
ffmpeg -r [fps] -pattern_type glob -i '*.png' -c:v libx264 -vf "format=yuv420p" [video name].mp4