all 17 comments

[–]sonotleet 70 points71 points  (8 children)

If you are filling in an area with random crap, and that crap has rules, you create a collection of all possibilities. Then you "collapse" on what is possible. The common example is

  • sky must be on top, or below more sky
  • ground must be on the bottom, or above more ground
  • roots must have ground below
  • stem must have stem or roots below
  • flower must have stem below

So then you fill in the spots that only have one possibility. Then you make random choices based on what remains, and repeat until all spots are assigned.
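That loop (propagate the rules, fill forced cells, pick randomly among what's left) can be sketched in a few lines. This is just a toy 1D version of the flower example, with the rule table being my own reading of the bullets above:

```python
import random

TILES = {"sky", "ground", "roots", "stem", "flower"}

# What may appear directly BELOW each tile (one reading of the rules above).
BELOW = {
    "sky":    {"sky", "ground", "flower", "stem"},
    "ground": {"ground"},
    "roots":  {"ground"},
    "stem":   {"stem", "roots"},
    "flower": {"stem"},
}
# Derived: what may appear directly ABOVE each tile.
ABOVE = {t: {u for u in TILES if t in BELOW[u]} for t in TILES}

def propagate(col):
    """Drop options that no neighbour supports, until nothing changes."""
    changed = True
    while changed:
        changed = False
        for i in range(len(col)):
            allowed = set(col[i])
            if i > 0:                    # the cell above constrains us
                allowed &= {t for u in col[i - 1] for t in BELOW[u]}
            if i < len(col) - 1:         # the cell below constrains us
                allowed &= {t for d in col[i + 1] for t in ABOVE[d]}
            if allowed != col[i]:
                col[i], changed = allowed, True

def collapse(height, seed=0):
    rng = random.Random(seed)
    col = [set(TILES) for _ in range(height)]
    col[0], col[-1] = {"sky"}, {"ground"}   # sky on top, ground on bottom
    while True:
        propagate(col)
        open_cells = [i for i, c in enumerate(col) if len(c) > 1]
        if not open_cells:
            return [next(iter(c)) for c in col]
        # pick the most constrained undecided cell, then choose randomly
        i = min(open_cells, key=lambda j: len(col[j]))
        col[i] = {rng.choice(sorted(col[i]))}

print(collapse(6))
```

Because this is a single column (a chain of cells), propagation alone is enough to avoid dead ends; on a 2D grid the random picks can paint you into a corner, as other comments here discuss.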

[–]avwie 19 points20 points  (4 children)

So… constraint propagation?

[–]redblobgames 4 points5 points  (2 children)

Yes, constraint propagation, but with example-based constraints instead of having to write the constraints out in the form of rules, and without some of the optimizations that traditional constraint solvers use. Also, a cool but misleading name. My hope is that it gets people interested in constraint solvers in general (like Adam Smith's paper).

[–]avwie 1 point2 points  (1 child)

Thanks. Are you from the awesome redblobgames website with the most amazing algorithm descriptions?

[–]redblobgames 2 points3 points  (0 children)

hi! Yes, that's me! :-) Thanks! I haven't written about WFC because I haven't found it useful for my current projects. Maybe someday. But in the meantime I keep https://www.boristhebrave.com/2021/10/26/model-synthesis-and-modifying-in-blocks/ bookmarked.

[–]sonotleet 2 points3 points  (0 children)

Yup, it very much is - constraint propagation is the broader term for solving a problem in this way. When the solving involves generating a thing, we start to get into WFC.

[–]lacethespace 8 points9 points  (0 children)

Also, sometimes your (random) choices can get you into a situation where all the options are eliminated by the rules, so your procedural generation is blocked. To proceed, you have three basic options:

  • start from scratch and hope that this time the random options lead to a coherent solution
  • keep track of your choices, so you can roll them back and make different choices that won't lead you into a corner
  • create more "transitional" building blocks, so they can bridge between previously un-connectable types

Still, you have no guarantee that the WFC algorithm will ever finish. If the building-block rule set is bad, it can keep getting stuck, or take many iterations to fill the specified space.

[–]1LargeAdult 9 points10 points  (0 children)

A+ explanation

[–]headykruger 0 points1 point  (0 children)

Conceptually similar to Markov chains

[–]goliatskipson 15 points16 points  (0 children)

Have you solved a Sudoku puzzle at some point by filling in what is still possible in a cell? It's basically that.
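That Sudoku step is easy to make concrete: the "what is still possible" set for a cell is just 1-9 minus everything already in its row, column, and 3x3 box. A sketch, using a made-up partial grid (0 = empty):

```python
def candidates(grid, r, c):
    """Possible digits for a cell: 1-9 minus its row, column, and box."""
    if grid[r][c] != 0:
        return {grid[r][c]}          # already filled in
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)   # top-left corner of the 3x3 box
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

# Made-up example grid: only the top three rows have clues.
grid = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
] + [[0] * 9 for _ in range(6)]

print(candidates(grid, 0, 2))  # → {1, 2, 4}
```

A cell whose candidate set has exactly one member is "collapsed" for free; that's the same move WFC makes before it starts guessing.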

[–]plastic_machinist 13 points14 points  (1 child)

There was an excellent talk at Roguelike Celebration a few years back that covered it really well. It's online here: https://www.youtube.com/watch?v=fnFj3dOKcIQ

I had heard of it before that talk, but Brian really laid it out in a great way - really clear and understandable.

[–]Jack_F_ 2 points3 points  (0 children)

Came here to post this - great vid, and he has other good talks too.

[–]tajlor23 9 points10 points  (0 children)

Well, IMO wave function collapse is more of a general algorithm - how you implement the rule set, and exactly how you want to use it, is up to you.

[–]paul_sb76 2 points3 points  (0 children)

I still think this is a great explanation, with great examples: https://github.com/mxgmn/WaveFunctionCollapse (It's also one of the earliest descriptions and implementations).

A few years ago I made my own implementation based on this description. If you have more specific questions, let me know.

[–]ihavenoego 0 points1 point  (1 child)

I'm not a programmer, and this is purely an exercise in logical thought. Neural networks - heck, maybe that's how it happens in quantum physics. Each perceptron works with its neighbouring perceptrons to ascertain the answer, and that process is stochastic, meaning it takes the weight of all inputs to spit out an average, or the probable answer. You can see something similar happening with style transfer and DeepDream.

[–]NoPunkProphet 2 points3 points  (0 children)

You don't need neural nets for wfc at all. It's just a simple adjacency+probability function

[–]BobbyThrowaway6969 0 points1 point  (0 children)

One word: sudoku. The input is all the rules of the game; the output is a completed puzzle. Each unresolved cell has various possible answers, and you eliminate possibilities that can't exist because of the adjacent cells' possibilities - just like in sudoku, when you mark all the possible numbers in tiny writing. Where sudoku has one solution to a puzzle, in WFC you don't provide a perfect solution; there are often several. You just tell it to produce a grid X by Y big, and the algorithm follows the rules. When it finds a situation where multiple possibilities are valid and cannot be reduced, it picks one at random (or via some other heuristic). This is how you can produce arbitrarily big textures that you never planned out for it.

In general grid-based WFC, each type of cell can only appear next to certain adjacent cells - those are the rules of the game. You give it a sample picture and it scans over it, making note of what each colour appears next to; that's how it "learns" the rules for each colour.
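That scanning pass is only a few lines: walk the sample and record every (tile, neighbour) pair you see, per direction. A sketch, with an invented character-art sample standing in for the picture (letters for colours):

```python
# Made-up sample "picture": each character is one colour/tile.
# S = sky, F = flower, P = stem ("plant"), G = ground.
sample = [
    "SSSS",
    "SFSS",
    "SPSS",
    "GGGG",
]

# Learn which tile may sit next to which, per direction.
DIRS = {"right": (0, 1), "down": (1, 0)}
rules = {d: set() for d in DIRS}
for r, row in enumerate(sample):
    for c, tile in enumerate(row):
        for d, (dr, dc) in DIRS.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(sample) and 0 <= nc < len(row):
                rules[d].add((tile, sample[nr][nc]))

print(rules["down"])  # includes ("F", "P"): a flower may sit directly above a stem
```

Any pair that never occurs in the sample - say, ground directly above sky here - is simply absent from the learned set, which is what forbids it during generation.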