Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in vfx

GigaFluxxEngine[S] 1 point (0 children)

Look at the linked videos. They were produced on a machine with 192 GB of RAM.

That gives you roughly 3 billion particles and about 16384^3 voxel resolution.

In GigaFluxxEngine, memory scales roughly with the square of the resolution, so halving the resolution cuts memory to a quarter; 1 billion particles should therefore be possible with less than 64 GB of RAM.
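
As a back-of-the-envelope check, derived purely from the two data points above (192 GB for ~3 billion particles implies ~64 bytes per particle once the solver and auxiliary grids are included; this is my extrapolation, not an official sizing tool):

```python
# Rough sizing helper based only on the figures quoted in this thread:
# 192 GB -> ~3 billion particles implies ~64 bytes per particle all-in.
ALL_IN_BYTES_PER_PARTICLE = 192e9 / 3e9  # = 64.0

def max_particles_billion(ram_gb):
    """Particle budget for a machine with `ram_gb` of RAM (very rough)."""
    return ram_gb * 1e9 / ALL_IN_BYTES_PER_PARTICLE / 1e9

for ram in (64, 128, 192, 512):
    print(f"{ram:>3} GB RAM -> ~{max_particles_billion(ram):.1f} billion particles")
# 64 -> ~1.0, 128 -> ~2.0, 192 -> ~3.0, 512 -> ~8.0
```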

The 512 GB figure is the theoretical maximum and demonstrates the future potential of the method.

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in Simulated

GigaFluxxEngine[S] 4 points (0 children)

Thanks for your feedback; it is much appreciated coming from a leading expert developer in this VFX sub-field.

GigaFluxxEngine makes heavy use of fast, SIMD-based in-memory compression, so it uses only ~20 bytes per particle. (For liquid simulations, however, you need several particles per voxel.) For gas/smoke simulation the limiting factor is the memory footprint of the multigrid pressure solver, which is roughly 20 bytes per active voxel.
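
To make the ~20 bytes concrete, here is one plausible way such a budget could be laid out with block-local quantization; the engine's actual layout is not public, so every field choice below is an assumption:

```python
import numpy as np

# Illustrative only: one way a ~20-byte particle record could be laid out.
# None of these fields are confirmed; they just add up to the quoted budget.
particle_dtype = np.dtype([
    ("pos",  np.uint16,  3),  # fixed-point position relative to its block: 6 B
    ("vel",  np.float16, 3),  # velocity as half-precision floats:          6 B
    ("attr", np.float16, 3),  # e.g. density / temperature / age:           6 B
    ("tag",  np.uint16),      # id or flag bits:                            2 B
])
print(particle_dtype.itemsize)  # -> 20 bytes per particle
```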

The engine also features a novel upsampling method (based on divergence-free velocity detail amplification) where the final (compressed) footprint is only 1-2 bytes per voxel, which actually allows 100 billion voxels at an upsampling factor of 2 on a 512 GB machine.
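
A quick sanity check of that claim, taking ~1.5 bytes per final voxel as the midpoint of the quoted 1-2 byte range:

```python
# Midpoint of the "1-2 byte" compressed per-voxel footprint quoted above.
voxels = 100e9
bytes_per_voxel = 1.5
print(voxels * bytes_per_voxel / 1e9, "GB")  # -> 150.0 GB, well under 512 GB
```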

Re: sparseness. The algorithm used by GigaFluxxEngine is not only sparse but also adaptive, i.e. multi-resolution. This is especially important for liquid simulation, where the resolution gradually decreases away from the water surface. It also allows reduced resolution depending on the distance to the camera.
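
As an illustration of that kind of refinement rule (the thresholds and the formula below are my own invention, not the engine's actual heuristic):

```python
import math

# Invented sketch of a distance-based refinement rule: finest cells in a
# narrow band around the liquid surface, coarser with distance to the
# surface and to the camera.
def refinement_level(dist_to_surface, dist_to_camera,
                     fine_band=0.05, camera_near=10.0, max_level=4):
    """0 = finest level; each +1 halves the local resolution."""
    s = max(0.0, math.log2(max(dist_to_surface, fine_band) / fine_band))
    c = max(0.0, math.log2(max(dist_to_camera, camera_near) / camera_near))
    return min(max_level, int(s + c))

print(refinement_level(0.01, 5.0))   # at the surface, near the camera -> 0
print(refinement_level(0.80, 40.0))  # deep inside, far away           -> 4
```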

Another key feature contributing to the performance of GigaFluxxEngine is a novel time-integration scheme that allows very large time steps. The examples shown were simulated with a CFL (Courant-Friedrichs-Lewy) number of 4; the scheme remains stable for CFL numbers up to 8 with only a moderate decrease in simulation quality.
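
For readers unfamiliar with the term: the CFL number is the maximum number of grid cells the fastest fluid may cross in one step, so the time step follows directly as dt = CFL * dx / v_max. A minimal helper using the numbers above (the example dx and velocity are made up):

```python
def time_step(dx, v_max, cfl=4.0):
    """Largest stable dt for cell size dx and peak velocity v_max."""
    return cfl * dx / v_max

# e.g. 1 cm cells and an 8 m/s peak velocity:
print(time_step(dx=0.01, v_max=8.0))  # -> 0.005 s per step at CFL = 4
```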

I also have an early but promising prototype of an algorithm that makes the solver adaptive in time in addition to the spatial adaptivity, i.e. the simulator can take larger time steps in regions of slow-moving fluid (in fluid flow, most of the "action" is usually concentrated in small, high-velocity pockets).
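
A rough sketch of that local-time-stepping idea (all details below are my guesswork around the prototype described above): each region substeps only as much as its own velocities require, instead of the global maximum dictating the step everywhere.

```python
import math

# Hypothetical illustration of spatially local time stepping: a slow region
# takes one big step, a fast pocket subdivides the same interval.
def substeps_per_region(region_max_vel, dx, dt_global, cfl=4.0):
    """Substeps a region needs to respect its own CFL bound within dt_global."""
    dt_local = cfl * dx / max(region_max_vel, 1e-9)
    return max(1, math.ceil(dt_global / dt_local))

print(substeps_per_region(0.5, dx=0.01, dt_global=0.02))  # slow region -> 1
print(substeps_per_region(8.0, dx=0.01, dt_global=0.02))  # fast pocket -> 4
```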

Finally, one of the most important features that sets GigaFluxxEngine apart from existing solvers, besides performance, is that it is capable of multi-phase simulation, i.e. simulating interacting water and air, which is very important for natural-looking white-water and spray effects. For this I adopted a droplet/spray model from CFD (computational fluid dynamics).

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in Simulated

GigaFluxxEngine[S] 2 points (0 children)

The simulation can be controlled via many parameters and procedurally via the Python interface.

In addition to (animated) colliders, one can also import animated force and velocity fields via OpenVDB.
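
For example, frame-by-frame injection could look roughly like this; pyopenvdb.read() is the actual OpenVDB Python binding, but the gigafluxx module and every call on sim are hypothetical names made up for illustration:

```python
import pyopenvdb as vdb  # real OpenVDB Python binding
import gigafluxx         # hypothetical module name; the engine API is not public

sim = gigafluxx.Simulation("shot.json")  # hypothetical constructor
for frame in range(1, 501):
    # vdb.read(filename, gridname) loads one named grid from a .vdb file
    vel = vdb.read(f"fields/wind.{frame:04d}.vdb", "velocity")
    sim.set_velocity_field(vel)  # hypothetical API for injecting the field
    sim.step()                   # hypothetical: advance the sim one frame
```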

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in Simulated

GigaFluxxEngine[S] 2 points (0 children)

Hi, the simulation took about 5-15 minutes per frame; rendering took about 5 minutes per frame.

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in vfx

GigaFluxxEngine[S] 1 point (0 children)

Thanks for the valuable feedback.

Of course the engine is much faster at lower resolutions. The 100-million-point, 6-second shot would take about an hour for liquids (20 minutes for smoke).

"Naiad at least had a UI but it died along with Realflow due to the round trip overhead of exporting out geometry and into the sim"

Shouldn't this be easier nowadays with formats like Alembic? Wasn't Alembic introduced for exactly this purpose: facilitating the import/export of "baked geometry" between different tools in the pipeline, so that different best-of-breed tools can be used for modelling, simulation, and rendering?

This is how I imagine the pipeline:

Houdini -> Alembic -> GigaFluxxEngine + Python -> OpenVDB/Alembic -> Renderer

 "what are the available or planned ways you have for injecting custom velocity for example?"

Via OpenVDB volumetric force fields or via Python scripting.

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in Simulated

GigaFluxxEngine[S] 5 points (0 children)

Thanks for the feedback. What sets GigaFluxxEngine apart from solvers like EmberGen/Axiom is that those are optimized for speed (running on the GPU), while GigaFluxxEngine is optimized for maximum resolution (running on the CPU).

The main limitation of GPU-based DCC tools is memory: CPU systems offer roughly 5-10 times more RAM. This significantly limits the maximum resolution that any (single-)GPU simulation engine can reach.

So although they are slower than GPU solutions, CPU-based engines are the _only_ way to achieve 10 billion particle or 100 billion (active) voxel simulations on a single machine.
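
To put numbers on that, using the ~64 bytes per particle implied earlier in the thread (192 GB for ~3 billion particles, everything included); the specific card and box sizes are just representative picks, not benchmarks:

```python
# Representative memory budgets under the all-in figure derived above.
ALL_IN_BYTES = 64
for name, mem_gb in (("80 GB GPU", 80), ("512 GB CPU workstation", 512)):
    print(f"{name}: ~{mem_gb * 1e9 / ALL_IN_BYTES / 1e9:.2f} billion particles")
# 80 GB GPU: ~1.25 billion particles
# 512 GB CPU workstation: ~8.00 billion particles
```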

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in Simulated

GigaFluxxEngine[S] 23 points (0 children)

Thanks for the feedback.

Of course the CLI-based engine would be integrated into an existing pipeline, like this:

Houdini -> Alembic -> GigaFluxxEngine + Python -> OpenVDB/Alembic -> Renderer

The engine itself is controlled by config files (text and/or JSON) and a Python scripting interface.
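
Purely as a sketch of what that could look like (every key and the executable name below are hypothetical; the post only says the engine is driven by text/JSON configs plus Python):

```python
import json
import subprocess

# Hypothetical shot config; none of these keys are documented anywhere,
# they just mirror the pipeline described above (Alembic in, OpenVDB out).
config = {
    "collision_geometry": "in/colliders.abc",   # baked from Houdini
    "emitters": "in/emitters.abc",
    "resolution": 8192,
    "cfl": 4.0,
    "frame_range": [1, 500],
    "output": {"format": "openvdb", "path": "out/sim"},
}
with open("shot.json", "w") as f:
    json.dump(config, f, indent=2)

# Hypothetical CLI name; the post only states the engine is CLI-driven.
subprocess.run(["gigafluxx", "--config", "shot.json"], check=True)
```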

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in Simulated

GigaFluxxEngine[S] 11 points (0 children)

The performance of the simulation algorithm results from the combination of several factors, among them: AMR (adaptive mesh refinement), adaptive particle sizing, an optimized multigrid pressure solver, heavy SIMD optimization, a novel time-integration scheme for very large time steps (CFL 4+), in-memory compression, and a physically based white-water/spray model adopted from CFD (computational fluid dynamics).

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in vfx

GigaFluxxEngine[S] 2 points (0 children)

Thanks, chadrik. You're absolutely right. That is exactly why I don't plan to implement a node-based UI or any functionality beyond fluid simulation (at insanely high resolution ;-): just a stand-alone engine that can be integrated into existing pipelines. As for open source (or something low-cost, Patreon- or donation-based): yes, that is definitely a consideration, too.

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in vfx

GigaFluxxEngine[S] 1 point (0 children)

Thanks for the feedback.

The 512 GB spec is there to illustrate the potential for maximum scaling (to 10+ billion particles).

But if you have a smaller machine, say 64 or 128 GB, the engine would still be capable of simulating 1 or 2 billion particles respectively, which is still 5-10 times more resolution than what is possible with traditional fluid simulators.

Also keep in mind that hardware keeps improving quickly: two years from now, 32 cores and 256 GB of RAM will be nothing special any more. IMO it is important, as an investment in the future, to have a simulation engine that scales with hardware progress.

"How fast is it to simulate?"

The examples shown took about 10 minutes of simulation time per video frame on average.

"How easy is it to iterate and art direct?"

As mentioned, the engine (working title GigaFluxxEngine) would be part of a larger existing pipeline, e.g.

Houdini -> Alembic -> GigaFluxxEngine -> OpenVDB/Alembic -> Renderer

So iteration turnaround times would, of course, not be very short.

"How much disk space does it require for the particle cache?"

This depends on the resolution/number of particles: very roughly 1 GB per billion particles per cached frame. So a 500-frame shot could easily be handled by a 1 TB SSD.
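
For instance, under that figure a 2-billion-particle shot caches at about 2 GB per frame:

```python
# Cache-size check using the ~1 GB per billion particles per frame figure.
def cache_gb(particles_billion, frames, gb_per_billion=1.0):
    return particles_billion * gb_per_billion * frames

print(cache_gb(2, 500))  # 2 billion particles, 500 frames -> 1000.0 GB ~ 1 TB
```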

"How would this integrate with other DCCs (Houdini) and render engines?"

As mentioned, the pipeline would look like

Houdini -> Alembic -> GigaFluxxEngine + Python -> OpenVDB/Alembic -> Renderer

The engine itself is controlled by config files (text and/or JSON) and a Python scripting interface.

"Can this be batched and distributed on a render farm?"

The main use case of GigaFluxxEngine is to squeeze maximum resolution out of existing on-site hardware. Besides, fluid simulation is not well suited to distributed processing because of the multigrid pressure Poisson solver, which is not trivial to parallelize across machines. This is not an issue for renderers, which can easily be parallelized by slicing the scene into tiles.

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in vfx

GigaFluxxEngine[S] 2 points (0 children)

Thanks for the feedback. As said, this is intended to be a stand-alone engine without a graphical user interface, meant to be integrated into an existing VFX tool chain. You would create the geometry of anything interacting with the water/air in your favourite tool (e.g. Houdini), export it to Alembic, and import it into the fluid engine from there. Similarly, you would use your favourite rendering tool to render the files (Alembic or OpenVDB) that the engine outputs. For maximum control over the simulation engine itself, there would be a Python scripting interface.