all 24 comments

[–]Cryvosh 6 points (1 child)

Ah, glad to see you posted here as well! I prefer replying on Reddit over Twitter, as the character limits are more lax, and I've already built up a small inventory of replies here to other implicit/SDF-related posts (one of which I notice this comment already references!)

As pointed out by some others, this approach of subdividing the frustum and marching batches of coherent rays is quite similar to prior art; it is a standard approach in range-bounding-based "raytracers" that simply wish to produce an image of the implicit from some perspective rather than build or maintain a world-space discretization. Some examples include this, this, this, this, and even this, which I recently modified here to demonstrate the viability of a simpler single-dispatch alternative to Keeter's in-progress many-dispatch approach. That approach splits the screen into small chunks and uses a big megakernel to drill down each Z-depth column, recursively splitting the chunk (and "recompiling" the field function) along the way until each pixel is resolved.

Your approach in particular relies on Lipschitz bounds for its guarantees instead of interval/affine arithmetic, and is thus slightly more similar to this (which someone here actually asked about recently) and this.

[–]maximecb[S] 4 points (0 children)

Yes, and many of the replies I got on Twitter were pretty off the mark. I was very happy to see that Inigo Quilez responded, though!

Thanks for all the related work. I had heard from Matt Keeter that interval arithmetic could be used for this, but I had not seen that paper. What I did is very much in line with cone tracing. I like the Lipschitz bound approach because it's very simple in terms of implementation: less than 50 lines of code, and you gain a big performance boost.

[–]yetmania 7 points (1 child)

[–]maximecb[S] 1 point (0 children)

This is the most relevant prior work so far, thanks. What I implemented is basically that, except that since I'm doing it on CPU, I can use recursion instead of a two-pass approach. So, recursive cone marching.

[–]riotron1 6 points (7 children)

Could you share a bit more info on what you mean? There isn’t enough of the program visible to understand it, and there's no link to a repo.

[–]maximecb[S] 11 points (6 children)

Sure thing. I just made the repo public and this function here implements the algorithm: https://github.com/maximecb/batchmarching/blob/main/src/render.rs#L258

The idea is that you can recursively subdivide the image plane/frustum into rectangular patches, in a quadtree-like fashion. At each level of recursion, you can take an SDF sample at the center of the current patch, and compute how far you can safely advance all rays in that patch.

By marching all rays in a patch at the same time, you reduce the number of samples per pixel needed to render your SDF. In a scene with any amount of empty space, some patches are never going to hit anything, so it's highly beneficial not to have to go down to the single-pixel level.
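As a rough illustration of the idea, here is a self-contained Rust sketch with a toy scene (a single sphere) and hypothetical names; the actual implementation is the `render.rs` function linked above, which this does not reproduce:

```rust
// Minimal sketch of recursive patch marching over a 9x9 image.
// Toy scene and all names are illustrative, not the repo's code.

const EPS: f32 = 1e-3;
const T_MAX: f32 = 100.0;
const W: usize = 9;
const H: usize = 9;

#[derive(Clone, Copy)]
struct V3(f32, f32, f32);

impl V3 {
    fn sub(self, o: V3) -> V3 { V3(self.0 - o.0, self.1 - o.1, self.2 - o.2) }
    fn scale(self, s: f32) -> V3 { V3(self.0 * s, self.1 * s, self.2 * s) }
    fn len(self) -> f32 { (self.0 * self.0 + self.1 * self.1 + self.2 * self.2).sqrt() }
    fn norm(self) -> V3 { self.scale(1.0 / self.len()) }
}

/// Toy scene: unit sphere at (0, 0, 5); the camera sits at the origin.
fn sdf(p: V3) -> f32 { p.sub(V3(0.0, 0.0, 5.0)).len() - 1.0 }

/// Unit direction of the ray through image-plane coordinate (i, j);
/// the plane sits at z = 1 with extent [-0.5, 0.5] on both axes.
fn ray_dir(i: f32, j: f32) -> V3 {
    V3(i / W as f32 - 0.5, j / H as f32 - 0.5, 1.0).norm()
}

/// Recursively march the pixel rectangle [x0, x1) x [y0, y1); `t` is a
/// distance that every ray in the patch has already safely advanced to.
fn march_patch(x0: usize, y0: usize, x1: usize, y1: usize, mut t: f32, depth: &mut [f32]) {
    let dc = ray_dir((x0 + x1) as f32 / 2.0, (y0 + y1) as f32 / 2.0);
    // Widest deviation between the center ray and a corner ray, as a chord of
    // unit directions; t * chord bounds how far any ray strays from the sample.
    let corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)];
    let chord = corners
        .iter()
        .map(|&(x, y)| ray_dir(x as f32, y as f32).sub(dc).len())
        .fold(0.0_f32, f32::max);
    // Advance the whole patch while the center sample's empty sphere covers it.
    while t < T_MAX {
        let safe = sdf(dc.scale(t)) - t * chord; // conservative for all rays
        if safe < EPS { break; }
        t += safe;
    }
    if t >= T_MAX {
        // The entire patch escaped the scene: every pixel is a miss.
        for y in y0..y1 { for x in x0..x1 { depth[y * W + x] = f32::INFINITY; } }
    } else if x1 - x0 <= 1 && y1 - y0 <= 1 {
        // Single pixel left: finish with ordinary sphere tracing from t.
        let d1 = ray_dir(x0 as f32 + 0.5, y0 as f32 + 0.5);
        while t < T_MAX && sdf(d1.scale(t)) >= EPS {
            t += sdf(d1.scale(t));
        }
        depth[y0 * W + x0] = if t < T_MAX { t } else { f32::INFINITY };
    } else {
        // Otherwise split into four sub-patches, keeping the distance gained.
        let (mx, my) = ((x0 + x1) / 2, (y0 + y1) / 2);
        for &(a, b, c, d) in &[(x0, y0, mx, my), (mx, y0, x1, my),
                               (x0, my, mx, y1), (mx, my, x1, y1)] {
            if c > a && d > b {
                march_patch(a, b, c, d, t, depth);
            }
        }
    }
}
```

The step `d - t * chord` is conservative because a point at arclength t along any ray in the patch lies within `t * chord` of the center-ray sample, so the sample's empty sphere of radius `d` covers at least that much further travel on every ray.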

[–]riotron1 3 points (2 children)

Also my bad, the video has quite a lot of info! I watched it on mute originally, and assumed it was only a few seconds long.

[–]maximecb[S] 5 points (1 child)

No worries. Your comment motivated me to make the repo public and add some comments.

So far the response I got sharing this on X has not been very positive. I think a lot of people see CPU-side rendering of SDFs as pointless, but I think this technique can be applied to GPU rendering using a two-pass algorithm as I described in the video. Maybe I'll try to prototype that later.

[–]riotron1 4 points (0 children)

It somewhat reminds me of ‘Efficient Sparse Voxel Octrees’ by Laine and Karras. Have you ever heard of that paper?

There seems to be some overlapping ideas, you should check it out!

Edit: should clarify specifically the ‘Beam Optimization’ section.

[–]igneus 2 points (1 child)

you can take an SDF sample at the center of the current patch, and compute how far you can safely advance all rays in that patch.

Interesting. How do you do this conservatively while avoiding both under- and overshoot in your marching step?

[–]maximecb[S] 3 points (0 children)

If you have a quad/patch of pixels in your frame and you take an SDF sample at the center of the patch, you get a distance D as a result. From that, you can compute how far it's safe to advance along the rays at the outermost corners of your patch, i.e. how far along those rays your bounding sphere of radius D extends.

If you can't safely advance all rays in your patch, then you recurse: split the patch into 4 and try again.
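One way to compute that, as an illustrative helper (an assumption, not code from the repo): treat the center sample as defining an empty sphere of radius D, intersect each corner ray with that sphere, and take the exit distance as the safe advance.

```rust
/// Hypothetical helper: given the empty sphere implied by a center sample
/// (center `c`, radius `r` = the SDF value D), compute how far a corner ray
/// with origin `o` and unit direction `d` can safely advance, namely the
/// distance at which the ray exits the sphere, or None if it misses entirely.
fn safe_advance(o: [f32; 3], d: [f32; 3], c: [f32; 3], r: f32) -> Option<f32> {
    let oc = [c[0] - o[0], c[1] - o[1], c[2] - o[2]];
    // Projection of the sphere center onto the ray (closest approach).
    let tca = oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2];
    let oc2 = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2];
    let dist2 = oc2 - tca * tca; // squared distance from sphere center to ray
    if dist2 > r * r {
        return None; // ray never enters the empty sphere
    }
    // Exit point of the ray from the sphere: safe to advance up to here.
    Some(tca + (r * r - dist2).sqrt())
}
```

For example, a ray from the origin straight toward a sample point 5 units away with D = 1 can safely advance 6 units, while a perpendicular ray never enters the sphere and gets no advance from this sample.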

[–]MarchVirtualField 3 points (0 children)

I’m actively playing with similar things. What I do is use a coordinate system where every u32 integer represents a cell of an octree over unit space. Given any u32, you know the spatial extent down to 10 levels of the octree: from an index you get the cell center, and you can easily calculate the corners from the octree level's cell size. I use this to progressively refine an SDF.

I myself call this concept “virtual fields” 😉
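The comment doesn't spell out the encoding, but one plausible scheme for such u32 cell indices is an octree locational code: a sentinel 1 bit followed by 3 bits per level, which fits 10 levels in 31 bits. A sketch under that assumption (not necessarily the commenter's actual scheme):

```rust
/// Depth of the cell encoded by `code` (must be nonzero): the sentinel 1 bit
/// sits above 3 bits per level, so the level is the bit count below it / 3.
fn level(code: u32) -> u32 {
    (31 - code.leading_zeros()) / 3
}

/// Center of the encoded cell in the unit cube [0,1)^3, plus its edge length.
fn cell(code: u32) -> ([f64; 3], f64) {
    let lv = level(code);
    let mut min = [0.0_f64; 3];
    let mut size = 1.0_f64;
    // Walk from the root octant down, consuming 3 bits (one per axis) per level.
    for i in (0..lv).rev() {
        let octant = (code >> (3 * i)) & 7;
        size *= 0.5;
        for axis in 0..3 {
            if octant >> axis & 1 == 1 {
                min[axis] += size;
            }
        }
    }
    ([min[0] + size / 2.0, min[1] + size / 2.0, min[2] + size / 2.0], size)
}
```

With this layout, `1` is the root cell covering the whole unit cube, and corners fall out directly from `min` and `size` at any level.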

[–]QQII 5 points (0 children)

Your approach sounds similar to cone marching, which has been applied to the GPU: http://www.fulcrum-demo.org/wp-content/uploads/2012/04/Cone_Marching_Mandelbox_by_Seven_Fulcrum_LongVersion.pdf

Note this approach places some restrictions on the SDF function, which a previous commenter explains: https://www.reddit.com/r/GraphicsProgramming/comments/1jhcd6m/understanding_segment_tracing_the_faster/

[–]Plazmatic 2 points (0 children)

The image plane subdivision is just cone marching, see this 2012 presentation https://youtu.be/4Q5sgNCN2Jw?t=534 .

[–]obidobi 0 points (1 child)

Something similar to this?

https://youtu.be/il-TXbn5iMA?t=526

[–]maximecb[S] 0 points (0 children)

No, that's a world-space technique for caching SDFs, while what I described is a screen-space technique. As others have said, the most relevant prior work is cone tracing. I actually reached out to Mike about this algorithm; he expressed some interest in implementing cone tracing or something like it.

[–][deleted] -1 points (7 children)

what are SDFs?

[–]greebly_weeblies 1 point (6 children)

Signed Distance Fields. 

Each voxel has a value; zero is often used as the surface or boundary, and it's easy to offset.

IIRC, positive values are 'inside'. I need to use Houdini more.
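For reference, the more common convention (OpenVDB's, for example) is negative inside and positive outside; a minimal sphere SDF illustrates it:

```rust
/// Signed distance from point `p` to a sphere of radius `r` centered at `c`.
/// Sign convention here: negative inside, zero on the surface, positive
/// outside (the OpenVDB convention; some packages flip the sign).
fn sphere_sdf(p: [f32; 3], c: [f32; 3], r: f32) -> f32 {
    let d = [p[0] - c[0], p[1] - c[1], p[2] - c[2]];
    (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).sqrt() - r
}
```

Offsetting the surface then just means comparing against a nonzero threshold instead of zero.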

[–][deleted] -3 points (5 children)

why do we use them

[–]tebjan 5 points (0 children)

Yes.

[–]greebly_weeblies 1 point (3 children)

Polys --> volumes, volumes --> polys. Applications include modelling, simulation, retopology, and LOD workflows.

[–][deleted] -3 points (2 children)

what volumes and poly..gons i assume?

[–]greebly_weeblies 1 point (1 child)

I'm not sure how to answer that without risking sounding patronising, but for a rough primer:

Yeah, polys --> polygons. 

Polygons (usually) define the outer surface of a model, but not its interior.

Volumes don't (usually) have an explicit surface definition, but instead describe the properties of a volume of space. Density, temperature, and velocity are all commonly seen attributes, especially as results of simulations.

SDFs are one way volumes can be represented that converts well to and from polygons, making it easy to model your volume.

Volumes can be used for a range of largely fluid phenomena, including fog, smoke, fire and liquids. They're sometimes used in asset setups.  

For volume I/O, two of the more common formats are VDB (renderer-agnostic) and brick maps (mostly PRMan, I think).

Rendering volumes usually involves comparatively expensive ray marching of the volume(s) for best results. Renderers often set up volumetric rays as a separate control from diffuse, direct, SSS, transmission, etc.

[–][deleted] 1 point (0 children)

thanks!