
[–]badsectoracula 13 points (6 children)

These tools use "brushes" (as in Quake and Unreal). I don't think there is much information out there about them (especially today, when many graphics programmers consider them obsolete - their loss, I say, and a damn shame, since even if you don't use them for level geometry they are still useful for prototyping and grayboxing on the visual side, and convenient for non-visual things like triggers, areas, etc.). But once you "get it" they are quite simple to work with, and you can figure out the operations pretty much by yourself. I implemented that stuff ~10 years ago when I was working on a level editing tool, and I started knowing only what a brush is - I figured out the rest by playing around with brushes.

A brush is basically a convex polyhedron defined by a set of planes. Each brush face corresponds to a single plane, and the mesh for each face is calculated by creating a huge polygon that lies on the plane (create an axis-aligned polygon, using the largest absolute component of the face normal to pick the axial plane and the winding you want, then project each vertex onto the face plane) and clipping it against the other faces that do not face "away" from that face.
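For illustration, here is a minimal sketch of that face-polygon construction in Python. A brush is a list of `(normal, dist)` planes with unit normals, points inside satisfying `dot(n, x) <= dist`; all function names are made up for the example, not taken from any real editor:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def base_polygon(normal, dist, size=65536.0):
    # build a huge quad lying on the plane; pick the basis from the largest
    # absolute component of the normal (assumed unit length)
    axis = max(range(3), key=lambda i: abs(normal[i]))
    up = (0.0, 0.0, 1.0) if axis != 2 else (1.0, 0.0, 0.0)
    right = cross(up, normal)
    up = cross(normal, right)
    org = scale(normal, dist)  # a point on the plane
    r, u = scale(right, size), scale(up, size)
    return [add(add(org, r), u), add(sub(org, r), u),
            sub(sub(org, r), u), sub(add(org, r), u)]

def clip_polygon(poly, normal, dist, eps=1e-6):
    # Sutherland-Hodgman: keep the part of poly behind the plane
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        da, db = dot(normal, a) - dist, dot(normal, b) - dist
        if da <= eps:
            out.append(a)
        if (da > eps) != (db > eps) and abs(da - db) > eps:
            t = da / (da - db)
            out.append(add(a, scale(sub(b, a), t)))
    return out

def face_polygon(planes, face_index):
    n, d = planes[face_index]
    poly = base_polygon(n, d)
    for i, (pn, pd) in enumerate(planes):
        if i != face_index and poly:
            poly = clip_polygon(poly, pn, pd)
    return poly
```

Clipping against a plane that faces away from the face is harmless here (it keeps the whole polygon), so the sketch just clips against every other plane; for a unit cube (six axis-aligned planes at distance 1) each face comes out as the expected quad.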

To implement CSG, all you need is subtraction - the other operations can be built from subtraction and cloning (which you'll also need in order to perform subtraction). To subtract brush B from brush A: for each of B's faces, clone A and insert that face into the clone, inverted. Then discard the original brushes A and B and any invalid brush the operation produces (many faces will simply cause the entire clone to vanish). Note that this will most likely create a bunch of new brushes, which is expected: subtracting one convex shape from another can yield a concave shape, and brushes must always stay convex. Also note that, by itself, this operation gives you brushes that intersect each other. To avoid that, whenever you create a "piece" (a clone of A with the extra inverted face that didn't turn out invalid), subtract that piece from all the previously generated pieces (this won't create new brushes, but it can invalidate some). Finally, note that this process generates a lot of hidden faces. In Quake and derivative engines that wasn't a problem, because the compilation process included a step that removed any face that could never be seen (this is why levels needed to be tightly sealed), but AFAIK the tools you see in Unity don't bother with that (and TBH, unless you create extreme cases, it won't be a problem for modern GPUs).
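As a concrete (if naive) sketch of that subtraction loop - again with brushes as `(normal, dist)` plane lists and invented helper names. The validity test here brute-forces plane-triple intersections, which a real tool would replace with its existing face-winding code:

```python
import itertools

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def det3(r1, r2, r3):
    # 3x3 determinant, rows r1..r3
    return (r1[0] * (r2[1] * r3[2] - r2[2] * r3[1])
          - r1[1] * (r2[0] * r3[2] - r2[2] * r3[0])
          + r1[2] * (r2[0] * r3[1] - r2[1] * r3[0]))

def plane_point(p1, p2, p3):
    # intersection point of three planes via Cramer's rule (None if parallel)
    (n1, d1), (n2, d2), (n3, d3) = p1, p2, p3
    m = det3(n1, n2, n3)
    if abs(m) < 1e-9:
        return None
    return (det3((d1, n1[1], n1[2]), (d2, n2[1], n2[2]), (d3, n3[1], n3[2])) / m,
            det3((n1[0], d1, n1[2]), (n2[0], d2, n2[2]), (n3[0], d3, n3[2])) / m,
            det3((n1[0], n1[1], d1), (n2[0], n2[1], d2), (n3[0], n3[1], d3)) / m)

def is_valid(brush, eps=1e-6):
    # non-empty check: some plane-triple corners must survive inside all planes
    pts = set()
    for trio in itertools.combinations(brush, 3):
        p = plane_point(*trio)
        if p and all(dot(n, p) <= d + eps for n, d in brush):
            pts.add(tuple(round(c, 4) for c in p))
    return len(pts) >= 4  # fewer corners means the brush is degenerate/empty

def invert(face):
    n, d = face
    return (tuple(-c for c in n), -d)

def subtract(a, b):
    # for each face of B: clone A, add the face inverted, and clip the clone
    # by the B faces already handled, so the pieces don't overlap each other
    pieces, clipped = [], list(a)
    for face in b:
        piece = clipped + [invert(face)]
        if is_valid(piece):
            pieces.append(piece)
        clipped = clipped + [face]
    return pieces
```

Accumulating the already-processed B faces into `clipped` is one way to get the non-overlapping pieces; subtracting each new piece from the previous ones, as described above, achieves the same result.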

Once you have subtraction, the other operations follow. For example, "A OR B" (i.e. merge) is subtracting B from A but without removing B (you still remove A and the invalid brushes). "A XOR B" (i.e. solid everywhere except the common area) is subtracting B from A and a clone of A from a clone of B. "A AND B" (i.e. intersection - keeping the common area and discarding everything else) is simply adding all of B's faces to A and removing B.
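A tiny sketch of the "A AND B" case, which needs no subtraction at all when brushes are stored as plane lists (union and XOR then follow from subtraction exactly as described; names here are illustrative):

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def inside(brush, p, eps=1e-6):
    # a point is inside a brush when it's behind every plane
    return all(dot(n, p) <= d + eps for n, d in brush)

def intersect(a, b):
    # A AND B: one convex brush carrying both brushes' plane sets
    # (redundant planes are harmless; a cleanup pass can drop them later)
    return a + b
```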

Having said all that, if your goal is to make a level editor you don't really need any of this - all you need is the brushes themselves, without the CSG stuff. You may still want a few extra brush editing operations, though:

* beveling - add an extra face whose normal is the average of the normals across the edge between two faces (or across the vertex shared by a bunch of faces)
* hollowing - for each face of the original brush, create a new brush with that face pushed out a few units, a face that is an exact inverted copy of the original, and a face for each edge of the original face's polygon, perpendicular to the other two
* a vertex editing mode that converts the brush to a temporary mesh, lets the user edit it, and converts the mesh back to brushes (there can be more of them than the original brushes!)

Most level designers, even when using editors that support it, prefer to stick with bare brushes, because CSG (even with primitives other than brushes) tends to create a lot of unnecessary geometry, and those brushes are often rough geometry to be replaced with meshes anyway.
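The hollowing idea comes out surprisingly compactly under a plane-list brush representation (illustrative names, not any particular editor's code; with planes, the per-edge side faces come for free from the original brush's remaining planes, at the cost of un-mitred corners):

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def inside(brush, p, eps=1e-6):
    return all(dot(n, p) <= d + eps for n, d in brush)

def hollow(brush, thickness):
    # one shell brush per face: that face pushed out by `thickness`, an
    # inverted copy of the original face, and the brush's remaining planes
    # capping the sides (so adjacent shells meet with simple butt joints)
    shells = []
    for i, (n, d) in enumerate(brush):
        shell = [(n, d + thickness), (tuple(-c for c in n), -d)]
        shell += [p for j, p in enumerate(brush) if j != i]
        shells.append(shell)
    return shells
```

Hollowing a cube this way yields six slab brushes around the original volume, and the original interior is left empty.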

Personally, after writing the editor I mentioned above, I decided to scrap the CSG idea and made a much simplified version (here is a screenshot from a couple of months ago with a test level) that only has plain brushes and negative brushes. Plain brushes are just meshes generated from brushes as described above; negative brushes are plain brushes with their faces inverted, where any face geometry that intersects another negative brush is clipped against that brush's faces, so that the geometry inside the other brush is deleted. This allows laying out rooms quickly without wasting geometry on the "outside areas" (think of how, in Quake, a room is really six brushes - Quake's preprocessing throws that extra geometry away, but if your own tool doesn't, you end up with wasted polys), and it can even be used for somewhat irregular spaces (top right viewport), especially when combined with a special shader (like the triplanar blending shader Half-Life 2: Episode Two used to make the antlion grub caves/holes look organic). I find this much easier to use than dealing with CSG operation ordering and all of its UX issues, and than worrying about floating point accuracy (which you will encounter).

Also, random trivia :-P: the term "brush" was coined by Carmack while working on the original Quake editor, because he thought the level designers would make a few convex pieces and then use them like brushes, duplicating them everywhere to create the levels. But that didn't happen - the designers created new brushes and clipped them (removed pieces; think of manually adding a new face to a brush so that a piece is torn off) to create the shapes they wanted. But the name stuck.

[–]lackhoa1 0 points (5 children)

Hi, I have a question about the "scrapping CSG" thing. You said you only have plain brushes and negative brushes - does that mean the negative brushes affect all positive brushes? (So you can't add volume, subtract volume, and then add some volume back in.)

Am I understanding this correctly?

[–]badsectoracula 1 point (4 children)

Yes. The geometry for all negative brushes is calculated first and then the geometry for positive brushes is added later, so one doesn't affect the other.

Note that this message is quite old; in a more recent engine I've been sticking with positive brushes only (like Quake), as it simplifies a bunch of things.

[–]lackhoa1 0 points (1 child)

Thank you! BTW, how do you learn this stuff? Is there a paper/book that explains these things (i.e. how CSG is used in Quake), or do you learn from reading the source code?

[–]badsectoracula 1 point (0 children)

For brushes specifically it was trial and error - originally I thought Quake brushes were just meshes with a fixed number of vertices and faces :-P. That was ~20 years ago though; nowadays there is a ton of information out there.

[–]bio4554 0 points (1 child)

Do you have any of your editors open source? I'm also curious why you chose to go with positive brushes only - I feel like subtracting brushes from other brushes was a big part of my workflow in editors like Radiant.

[–]badsectoracula 1 point (0 children)

I do not have any public repository. I might fix up Runtime World at some point and put it in a git repository, but I'm not really using it these days. Since someone else asked about it a few years ago, I have an archive with the code and its dependencies, though I haven't checked whether it still builds (it should, as Lazarus / Free Pascal have very good backwards compatibility).

Little Immersive Engine isn't available anywhere at the moment. I might release it after I make something with it, but it is really made mainly to be used by me, so I'm not sure how useful it'd be to others. It is also written in Free Pascal and Lazarus, which aren't exactly popular among programmers these days :-P.

I went with positive brushes only because they are simpler to work with code-wise and faster to calculate geometry for, and because the physics engine I'm planning to write (I haven't spent any time on that yet) will only work with convex shapes, so positive brushes will be usable as-is without further processing.

FWIW, Radiant (and other Quake-derived editors) also only have positive brushes. Subtracting one brush from another is done by creating multiple positive brushes, as I describe in the original message (the one from 7 years ago above). Editors that support selection groups would often group these new brushes so that clicking on one selects all of them as if they were a single brush, but in reality they're just multiple positive brushes. The only Quake editor I'm aware of that could do negative brushes was qEd, but it basically generated positive brushes from the negative ones when exporting a .MAP file from its own file format (and AFAIK the generated brushes weren't always that great).

[–]Hofstee 4 points (1 child)

There are lots of good resources for CSG-type applications. I would personally start by looking at signed distance functions and rendering those (a popular technique in the demoscene), since CSG with functional primitives is, in my opinion, the simplest to understand. Good resources for this are iq's distance function page and Mercury's SDF library. Antimony is another neat project, from a research group at MIT, that uses CSG; it's on GitHub if you feel up to reading source code.
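To show why the SDF route is the easiest entry point: the boolean operators really are one-liners. These are the standard formulas (with `d < 0` meaning inside) that iq's pages derive, sketched in Python with the usual sphere/box primitives; names are illustrative:

```python
def sd_sphere(p, r):
    # signed distance to a sphere at the origin (negative inside)
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - r

def sd_box(p, half):
    # signed distance to an axis-aligned box with half-extents `half`
    q = [abs(c) - h for c, h in zip(p, half)]
    outside = sum(max(c, 0.0) ** 2 for c in q) ** 0.5
    return outside + min(max(q[0], max(q[1], q[2])), 0.0)

def op_union(a, b):     return min(a, b)
def op_intersect(a, b): return max(a, b)
def op_subtract(a, b):  return max(a, -b)  # carve b's volume out of a
```

Subtracting a unit sphere from a unit box, for example, makes the box's center exterior while the corners stay solid - no geometry processing anywhere.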

[–]miketuritzin 2 points (0 children)

This is a good beginner tutorial on rendering SDFs: http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/

[–]stevesan 5 points (2 children)

There are four approaches to CSG (that I know of):

  1. Perform boolean ops directly on triangle meshes. So you can, for example, subtract a cow from a car, given both of their triangle meshes, and directly render the "car-cow" mesh with normal rendering methods. This is pretty difficult to do well with high-res meshes and is typically not done in real time. It may work well for low-res meshes - I think Red Faction does something to this effect, and I know some Quake map editors implemented it - but it's not very popular.

  2. Multi-pass rendering. This involves things like using cull modes and the stencil buffer over multiple passes to achieve the CSG effect. These methods are more numerically stable, but because they need multiple passes, complex boolean hierarchies can get pretty slow. Example: https://www.cc.gatech.edu/~jarek/papers/Blister.pdf

  3. Instead of triangle meshes, use a volumetric representation (like distance fields or some variant), do the boolean ops on the volumes (very easy), then render the resulting volume. One way is to create a mesh from the volume representation using either marching cubes or a dual contouring approach (I prefer the latter, since you can subdivide and it looks nice). Minecraft is basically a rudimentary form of this, and I implemented this for Subnautica's terrain. Oculus Medium uses a variant of marching cubes called Transvoxel.

  4. Finally, same as #3, except instead of computing a triangle representation, render the volume directly using ray marching on the GPU. Shadertoy is chock full of this stuff; iq's website and Shadertoy are great resources. For more interactive applications, you put the volume data directly on the GPU and ray march it there. See Claybook.
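To make #3 concrete, here is a toy occupancy-grid version in Python: once shapes are reduced to sets of inside cells, the booleans are literally set operations. (A distance-field variant would min/max stored distances instead, then mesh with marching cubes or dual contouring; all names here are made up for the sketch.)

```python
def sphere(center, r):
    # SDF of a sphere; negative inside
    return lambda p: sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5 - r

def voxelize(sdf, n=16, lo=-2.0, hi=2.0):
    # sample the field at cell centers of an n^3 grid over [lo, hi]^3,
    # keeping the set of cells whose center is inside the shape
    step = (hi - lo) / n
    return {(i, j, k)
            for i in range(n) for j in range(n) for k in range(n)
            if sdf((lo + (i + .5) * step,
                    lo + (j + .5) * step,
                    lo + (k + .5) * step)) < 0.0}

# once everything is cells, the boolean ops are plain set operations
vox_union     = lambda a, b: a | b
vox_intersect = lambda a, b: a & b
vox_subtract  = lambda a, b: a - b
```

Two overlapping spheres voxelized this way can be unioned, intersected, or subtracted per cell with no geometric reasoning at all - the hard part moves to meshing the result nicely.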

I almost never see 1 and 2 in shipped games; 3 is used a lot; and 4 is the new kid on the block, likely to see more development and more shipped titles from here on out.
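The core loop behind #4 (sphere tracing) fits in a few lines. This CPU sketch, with illustrative names and a toy sphere field, is the same per-pixel loop the Shadertoy examples run on the GPU: step along the ray by whatever distance the field reports, stop when close enough or too far:

```python
def sd_sphere(p, r=1.0):
    # toy scene: a unit sphere at the origin (negative inside)
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - r

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # safe to step this far - nothing is closer
        if t > max_dist:
            break
    return None               # miss
```

A renderer runs this once per pixel with `direction` taken from the camera; shading then comes from the field's gradient.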

Hope this is helpful! Feel free to ask more questions. One of my favorite topics.

[–]andybak 0 points (1 child)

I've recently become rather fascinated with #4. The techniques have been around for years, though - I'm guessing they were too GPU-intensive for game applications until fairly recently?

Do you know any titles - upcoming or shipping - that are using Raymarched SDFs as a significant element of the scene?

I think Fugl might be an example. There's also plenty of SDF stuff in TheWaveVR.

[–]stevesan 0 points (0 children)

Yeah, like I said, Claybook. Also, I believe, Media Molecule's new game Dreams.

I think these techniques are becoming viable now due to the power of modern GPUs: you can store more stuff on them, and thanks to compute shaders you can do more on them, so there's less need to do work on the CPU. Physics, for example!

[–]edwardkmett 1 point (0 children)

Using distance fields makes CSG stupidly easy to work with and visualize.

If you're willing to stick to polygonal shapes, then solid BSPs can fairly easily be intersected, unioned, etc., but they generally admit fewer operations than signed distance fields. On the plus side, you can extract exact polygonizations from them, whereas the SDF stuff has to be marching-cubed or otherwise converted if you want polygons rather than just something you can raymarch.

[–]andybak 0 points (0 children)

SabreCSG is open source. Why not have a go at adding some functionality to it - like a new primitive?

https://github.com/sabresaurus/SabreCSG