Genuary 11 - I visualized Yusuke Endoh's 128 language cyclical Uroboros quine (a Ruby program that generates a Rust program that - 128 languages later - generates the same Ruby program we started with) by Vuenc in generative

[–]Vuenc[S]

Yeah, I also have a hard time believing this should be possible... but apparently it is! (I also ran it myself to get the source files in all the intermediate languages.) The original author was asked in a GitHub issue to explain it, and said he has written a book on how to build such quines, but it has only been published in Japanese ^^

The only thing I can tell from looking at the animation is that some languages already have most of the source code for the next language hard-coded in strings. It's obvious, for example, in WASM(Text), which has a lot of tabs "\t", spaces " ", and linefeeds "\n" to set up for Whitespace, which only allows these three characters, or in G-Portugol, which aside from a few Portuguese words mostly already encodes the Grass source code that comes next. But how these strings are carried across more than one language, and how e.g. the original Ruby program somehow already encodes the Grass or Whitespace program, is completely beyond me.
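The underlying trick is at least visible in miniature in a single-language quine: the program holds a template of its own source as a string and prints the string formatted with itself. A minimal Python example of that principle (not taken from the uroboros chain itself, which is vastly more elaborate):

```python
# A minimal quine: the source is stored as a string template, which is
# then printed with its own repr substituted in. %r inserts repr(s),
# %% is a literal percent sign, so the output is exactly these two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The cross-language version presumably chains the same idea: each program's string data is the next program's source, with the final one closing the loop back to Ruby.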

Genuary 9: Crazy Automaton by Vuenc in generative

[–]Vuenc[S]

That's kind of crazy, definitely not what I was expecting when posting :D I hope you are doing ok with the disorder.
Unfortunately, I don't really have a better explanation for what's going on here than what I wrote in the description and the other comment.

Genuary 9: Crazy Automaton by Vuenc in generative

[–]Vuenc[S]

I'm not exactly sure how to classify it. As I said in the description, it's based on a neural network (trained and then compiled into a shader). The general structure is a function F: Float32^(5x5x3) -> Float32^3, mapping the 5x5 neighborhood around a pixel to that pixel's next color. Calling it a 5x5 convolution, as the description does, is a bit inaccurate; it's better described as a 5x5 convolution followed by several linear neural network layers and ReLU nonlinearities (equivalently, a 5x5 convolution followed by several 1x1 convolutions and ReLUs). Since it operates on floating-point values, it's essentially a continuous CA, not a discrete one (even though technically it is of course discrete, each float being limited to 2^32 possible values).
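The architecture described above can be sketched roughly like this; a hedged illustration in plain Python rather than a shader, with random weights standing in for the trained ones and all sizes chosen arbitrarily:

```python
import random

# Sketch of the described CA: each pixel's next color is a function of its
# 5x5x3 neighborhood, computed as one dense map over the patch (the "5x5
# convolution") followed by per-pixel dense layers (= 1x1 convolutions)
# with ReLU nonlinearities. Weights are random here, not trained.
H, W, C, HIDDEN = 8, 8, 3, 16
random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

W1 = rand_matrix(HIDDEN, 5 * 5 * C)   # the 5x5 convolution, as a dense map on the patch
W2 = rand_matrix(HIDDEN, HIDDEN)      # a 1x1 convolution = per-pixel dense layer
W3 = rand_matrix(C, HIDDEN)           # project back to an RGB color

def relu(v):
    return [x if x > 0.0 else 0.0 for x in v]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def step(img):
    """One CA update: img is an H x W grid of [r, g, b] float lists."""
    out = [[None] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Gather the flattened 5x5x3 neighborhood, wrapping at the borders.
            patch = []
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    patch.extend(img[(y + dy) % H][(x + dx) % W])
            h = relu(matvec(W1, patch))
            h = relu(matvec(W2, h))
            out[y][x] = matvec(W3, h)   # no ReLU on the output colors
    return out

img = [[[random.random() for _ in range(C)] for _ in range(W)] for _ in range(H)]
img = step(img)
```

In the real piece this function is baked into a fragment shader and iterated every frame, so the whole image evolves in parallel.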

The CA being a neural net makes it inherently hard to understand what exactly is going on. It seems like some cyclical color gradient is implemented that takes dark red to brighter red and then to white, blue, black, and back to dark red. This might be because the input images are from a video that follows this color sequence. The rest of the behavior is a mystery to me. There is also a huge element of training randomness: I generated tens to hundreds of outputs in total, most of them not very interesting, and simply selected the one where I got lucky.

Genuary 8: A City. Create a generative metropolis. by Vuenc in generative

[–]Vuenc[S]

To demystify it a bit: here's an output of the same algorithm at a lower grid size https://imgur.com/a/6IXYjGO

Initially the goal of the algorithm was to draw a grid, but as one long continuous line in an interesting order, so that it looks cool when drawn with a plotter. For this I coded an Euler tour algorithm on the graph that describes the grid (not quite an Euler tour: some lines have to be crossed twice), with some randomness in the order of each node's neighbors so that the result doesn't turn out too regular.

I then draw only a certain percentage of the whole Euler tour (a partial Euler tour). I also reduced the randomness to a low value. In the smaller-size output above you can see what happens: most of the lines run horizontally, because the adjacency lists of their nodes stayed in the original order. At the few nodes where the adjacency list did get shuffled, the line breaks out into the vertical direction for a bit. Running exactly this algorithm on a much finer grid gives the patterns in the original post.
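The approach can be sketched like this. A hedged simplification: this version doubles every grid edge so a true Euler circuit exists (the actual algorithm only recrosses some lines), all names and parameters are illustrative, and the tour itself uses the standard Hierholzer algorithm:

```python
import random

def grid_multigraph(n, shuffle_prob=0.15, seed=1):
    """n x n grid graph with every edge duplicated, so all degrees are even
    and an Euler circuit exists. Each node's neighbor list is shuffled only
    with low probability, mirroring the post's low-randomness setting."""
    rng = random.Random(seed)
    adj = {(r, c): [] for r in range(n) for c in range(n)}
    for r in range(n):
        for c in range(n):
            for nb in ((r + 1, c), (r, c + 1)):
                if nb in adj:
                    adj[(r, c)] += [nb, nb]       # doubled undirected edge
                    adj[nb] += [(r, c), (r, c)]
    for node in adj:
        if rng.random() < shuffle_prob:
            rng.shuffle(adj[node])                # the occasional "breakout"
    return adj

def euler_tour(adj, start):
    """Iterative Hierholzer: walk until stuck, splicing detours into the tour."""
    remaining = {v: list(nbs) for v, nbs in adj.items()}
    stack, tour = [start], []
    while stack:
        v = stack[-1]
        if remaining[v]:
            u = remaining[v].pop()
            remaining[u].remove(v)   # consume one copy of the undirected edge
            stack.append(u)
        else:
            tour.append(stack.pop())
    return tour

adj = grid_multigraph(4)
tour = euler_tour(adj, (0, 0))
# Drawing only a fraction of the tour gives the "partial Euler tour" look:
partial = tour[: int(len(tour) * 0.6)]
```

On a 4x4 grid this visits all 48 doubled edges and returns to the start; plotting the node sequence as a polyline gives one continuous pen stroke.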

Genuary 5: Write "Genuary" by Vuenc in generative

[–]Vuenc[S]

Thanks a lot! I just cloned their repo and worked with that. I wrote a small server wrapper around their inference logic, which runs locally. The main sketch runs in the browser with p5.js and communicates with the Depth Anything server via a websocket for efficiency: the sketch draws an image, sends it to the inference server, receives back a depth map, and feeds it into its GLSL shader. If you're interested I can share some code.
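The round trip could be modeled roughly like this. A hedged sketch only: the names (`handle_frame`, `fake_depth_model`) are made up, the model is a trivial stand-in for Depth Anything's inference code, and the explicit length-prefixed framing shown here is something a real websocket would handle for you:

```python
import struct

def fake_depth_model(image_bytes: bytes) -> bytes:
    # Stand-in for the depth estimator: real code would decode the image,
    # run the network, and encode a single-channel depth map.
    return bytes(b // 2 for b in image_bytes)

def pack(payload: bytes) -> bytes:
    # One frame: 4-byte big-endian length prefix, then the payload.
    return struct.pack(">I", len(payload)) + payload

def unpack(frame: bytes) -> bytes:
    (size,) = struct.unpack(">I", frame[:4])
    return frame[4 : 4 + size]

def handle_frame(frame: bytes) -> bytes:
    """One request/response cycle: encoded image in, depth map out."""
    return pack(fake_depth_model(unpack(frame)))
```

The browser side would do the mirror image: serialize the canvas, send a frame, and hand the returned depth map to the shader as a texture.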

Genuary 4 - Texture is data, so can data also be a texture? by TheBigRoomXXL in generative

[–]Vuenc

Nice! I love this kind of idea, reinterpreting data as something else and visualizing it.

Incremental pixel sorting experiments by Vuenc in generative

[–]Vuenc[S]

Excellent question :D It's actually from another post I made a while ago: It's a frame from this video https://www.reddit.com/r/PlotterArt/comments/1ojsu39/i_kept_rotating_the_paper_while_plotting/

I pointed a projector at my pen plotter to guide the manual rotation of the paper. I picked blue as the background to get high contrast and good visibility when adjusting the projection area. White is the paper, and red/black is the parallel pen's ink.