Fluid Composer: Fluid Detail Composition and Rendering Using Video Diffusion Models

Computer Graphics Forum

Abstract

We introduce a hybrid pipeline that combines classical fluid simulation with modern generative video models to produce high-quality, controllable fluid effects without difficult-to-implement solvers or costly ray tracing. First, a lightweight physics-based simulator enforces core properties such as incompressibility and lets artists specify layout, boundary conditions, and source positions. Second, we render a simple "control video" via real-time rasterisation (diffuse shading, masks, depth) to capture scene structure and material regions. Third, a text-guided diffusion transformer (e.g., VACE) treats this control video as a canvas, refining it by adding foam, bubbles, splashes, and realistic colour blending across multiple materials. Our method leverages the implicit physical priors of pre-trained video generators, while masking and noise warping ensure precise per-material control and seamless mixtures in latent space. Compared with purely simulation-based approaches or text-only generative ones, we avoid implementing specialised multiphase algorithms and expensive rendering passes, yet retain full artistic control over fluid behaviour and appearance. We demonstrate that this training-free strategy delivers photorealistic fluid videos, supports diverse effects (multiphase flows, transparent media, and wet foams), and simplifies the artist’s workflow by unifying simulation, shading, and generative rendering in a single, extensible framework.
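
To make the first stage concrete, below is a minimal sketch of the incompressibility constraint such a lightweight simulator enforces: a spectral pressure projection that removes the divergent component of a 2D periodic velocity field. The function name, the uniform periodic grid, and the FFT-based Poisson solve are our own illustrative assumptions, not the paper's solver, which additionally handles the boundary conditions and sources mentioned above.

import numpy as np

def project_divergence_free(u, v, dx=1.0):
    # Remove the divergent part of a periodic 2D velocity field (u, v),
    # each an (ny, nx) array, by solving a pressure Poisson equation in
    # Fourier space and subtracting the pressure gradient.
    # NOTE: illustrative stand-in, not the paper's simulator.
    ny, nx = u.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)   # wavenumbers along x
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)   # wavenumbers along y
    KX, KY = np.meshgrid(kx, ky)                 # both shaped (ny, nx)

    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = KX * u_hat + KY * v_hat            # divergence, spectrally
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                               # guard the zero mode
    p_hat = div_hat / k2                         # Poisson solve for pressure
    p_hat[0, 0] = 0.0                            # pressure is defined up to a constant

    # Subtracting grad(p) leaves a field whose spectral divergence is zero.
    u_hat -= KX * p_hat
    v_hat -= KY * p_hat
    return np.fft.ifft2(u_hat).real, np.fft.ifft2(v_hat).real

Applying this projection to an arbitrary field and re-measuring its spectral divergence should yield values at machine precision; in a full simulator this step is typically interleaved with advection and external forces every frame.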

BibTeX

@article{chen2025fluid,
  title     = {Fluid Composer: Fluid Detail Composition and Rendering Using Video Diffusion Models},
  author    = {Chen, Duowen and Lao, Zhiqiang and Guo, Yu and Yu, Heather},
  journal   = {Computer Graphics Forum},
  pages     = {e70300},
  year      = {2025},
  publisher = {Wiley}
}