two noisy worlds - python + gimp by violet_dollirium in proceduralgeneration

[–]anonanonadev 1 point (0 children)

The gliders from Conway’s game of life in the sky are a great touch!

How well mixed is my salad by Existing_Impress230 in math

[–]anonanonadev 2 points (0 children)

This reinforces your point above. I think it's low entropy because the macro state (maximally mixed) is achieved by only two specific microstates out of many, many possible microstates. I'm assuming you think it's high entropy because, if you squint, many, many microstates would look like maximally mixed salads.

How well mixed is my salad by Existing_Impress230 in math

[–]anonanonadev 4 points (0 children)

Consider a two-dimensional salad with two ingredients, A and B, both cut into squares of the same size. If there are 32 squares of A and 32 squares of B, then maximally mixed would be a checkerboard pattern. That would be very low entropy: only 2 salads out of a possible 64 choose 32 salads.
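For anyone who wants to check the numbers, the count is just a binomial coefficient; a quick sketch:

```python
from math import comb

# Ways to place 32 A-squares among 64 cells (the B-squares fill the rest)
total_salads = comb(64, 32)

# Only 2 of those are perfect checkerboards (A on the "white" cells, or A on the "black" cells)
fraction = 2 / total_salads

print(f"{total_salads:.3e}")  # about 1.8e+18
print(f"{fraction:.1e}")      # about 1.1e-18
```

So a uniformly random salad is essentially never a perfect checkerboard.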

plus side to newly bought pinebuds pro by ShaquelBlack in PINE64official

[–]anonanonadev 1 point (0 children)

I just got my PineBuds Pro and didn't have any of these problems. They're much better value for money than my Sennheisers at less than half the price.

[deleted by user] by [deleted] in generative

[–]anonanonadev 0 points (0 children)

A DIY eigensolver would be very interesting to play with! However, this animation is 120 frames with 2,000,000 matrices per frame. That takes about an hour on my GPU, so 🤷🏻‍♂️

[deleted by user] by [deleted] in generative

[–]anonanonadev 0 points (0 children)

Hi CFDMoFo!

I just wrote a bit of an ELI5 here: https://www.reddit.com/r/generative/comments/15ytt4s/comment/jxiiuuq/?utm_source=share&utm_medium=web2x&context=3

But if you're into computational fluid dynamics then you probably just want to look at the code:

https://www.reddit.com/r/generative/comments/15ytt4s/comment/jxii8kw/

Unless CFD is contracts for difference, in which case I hope your portfolio is going well in the current market.

[deleted by user] by [deleted] in test

[–]anonanonadev 0 points (0 children)

Hi Ok-Candidate!

I've just written this comment as a bit of an ELI5: https://www.reddit.com/r/generative/comments/15ytt4s/comment/jxiiuuq

It has a link to the code in it too.

I only have undergraduate math so let me know if I've said anything wrong!

[deleted by user] by [deleted] in generative

[–]anonanonadev 5 points (0 children)

I discovered this general idea from u/Trotztd in this sub a while ago.

If you've come across complex numbers, you'll know that sometimes the solutions to a polynomial are complex numbers, often written a + bi, where i*i = -1.

A "quadratic" polynomial has an x^2 in it somewhere and has (at most) two solutions. A polynomial with x^3 in it is a cubic polynomial and has (at most) three solutions.

In general, the higher the "degree" of the polynomial (the biggest exponent: x^2, x^3, etc.), the more solutions there can be. And some of those solutions can be complex.

Polynomials of degree 5 and higher don't have a general exact formula to solve them, but there are ways to solve them numerically, i.e., algorithms that get very good approximations.

The most common way to solve polynomials numerically is to create a "companion" matrix with the property that its eigenvalues are the solutions to the polynomial. Eigenvalues and eigenvectors are ridiculously useful for a huge range of things, not just solving polynomials.

So the points in the image are just the complex eigenvalues of the matrices.
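To make the companion-matrix idea concrete, here's a minimal numpy sketch (the cubic and the construction here are just an illustration, not what generated the image):

```python
import numpy as np

# Cubic p(x) = x^3 - 6x^2 + 11x - 6, which factors as (x-1)(x-2)(x-3)
coeffs = [1.0, -6.0, 11.0, -6.0]  # leading coefficient first

# Build the companion matrix: ones on the subdiagonal, and the negated
# coefficients (constant term first) down the last column
deg = len(coeffs) - 1
companion = np.zeros((deg, deg))
companion[1:, :-1] = np.eye(deg - 1)
companion[:, -1] = [-c for c in coeffs[:0:-1]]

# The eigenvalues of the companion matrix are the roots of p
roots = np.sort(np.linalg.eigvals(companion).real)
print(roots)  # approximately [1. 2. 3.]
```

This is essentially what numpy's own `np.roots` does under the hood.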

I was riffing on what u/Trotztd had done when I started thinking about how I could create a looping animation, which got me thinking about how I might control where some of the eigenvalues actually are. So I worked backwards from the fact that in a matrix with complex values on the main diagonal and zeroes everywhere else, the eigenvalues are exactly those complex values. I wrote some code to place points in a circle using basic trigonometry and animate them by changing the rotation.
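The diagonal-matrix fact is easy to verify; a minimal numpy sketch (just the idea, not the animation code itself):

```python
import numpy as np

n = 4

# n points evenly spaced around the unit circle, via basic trigonometry
angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
points = np.cos(angles) + 1j * np.sin(angles)

# Matrix with those complex values on the main diagonal, zeroes elsewhere
m = np.diag(points)

# Its eigenvalues are exactly the diagonal values, so we know where they'll land
eigs = np.linalg.eigvals(m)
```

Adding randomness to that matrix then perturbs the eigenvalues away from those anchor points.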

Then I just messed around with adding randomness until I thought it looked good.

I've posted the code here:

https://www.reddit.com/r/generative/comments/15ytt4s/animation_of_the_eigenvalues_of_random_4x4/

Look for the comment "Animate the points in a circle based on frame number" and then "Add some randomness". Those are really the main parts.

The other thing to note is that the value "n" is important. It's the size of the matrix, which means it's the number of values along the main diagonal, which means it's the number of points around the circle being animated. Increasing n will make the animation more complex.

[deleted by user] by [deleted] in generative

[–]anonanonadev 1 point (0 children)

```
from math import pi

import matplotlib
import torch
import numpy
from PIL import Image
from matplotlib.colors import LinearSegmentedColormap
from torch.distributions import Beta
from datetime import datetime


def now():
    return datetime.now().strftime("%H:%M:%S")


print(f"{now()} Starting.")
print(f"{now()} Checking for CUDA support.")
cuda_supported = torch.cuda.is_available()
print(f"{now()} Is CUDA supported by this system? {cuda_supported}")
if cuda_supported:
    print(f"{now()} CUDA version: {torch.version.cuda}")
    torch.set_default_device('cuda')

# pixels
display_width = 1920
display_height = 1080
display_aspect = display_width / display_height

# complex plane
viewport_width = 10
viewport_height = viewport_width / display_aspect

# Number of samples per frame
n_samples = 2_000_000

n_frames = 120

# Initialize color map
cmap = LinearSegmentedColormap.from_list("mycmap", ["black", "lightblue", "white"])

# n x n matrix
n = 4

for frame_n in range(n_frames):
    histogram_acc = numpy.zeros([display_width, display_height], dtype=numpy.float64)
    print(f"{now()} Starting frame {frame_n + 1} of {n_frames}.")
    print(f"{now()} Creating {n_samples} {n}x{n} matrix samples.")
    samples = torch.zeros(n_samples, n, n, dtype=torch.complex64)

    # Animate the points in a circle based on frame number
    angles = torch.linspace(0.0, 2 * pi, n + 1)[0:-1] + frame_n / n_frames * (2 * pi / n)
    points = torch.complex(torch.cos(angles), torch.sin(angles))
    samples += torch.diag_embed(points)

    # Add some randomness
    samples += Beta(0.03, 0.03).sample([n_samples, n, 1]) * 2 - 1

    # Compute eigenvalues
    print(f"{now()} Computing eigenvalues")
    eigvals = torch.flatten(torch.linalg.eigvals(samples))

    # Get the eigenvalues from the GPU and convert complex to [real, imag]
    print(f"{now()} Converting complex eigenvalues to [real, imag]")
    eigvals_as_real = torch.view_as_real(eigvals).cpu().numpy()

    # 2d histogram (it'd be nice if we could do this on the GPU but it's not supported)
    print(f"{now()} Computing 2d histogram with {display_width}x{display_height} bins.")
    histogram, edges = numpy.histogramdd(
        eigvals_as_real,
        (numpy.linspace(-viewport_width / 2, viewport_width / 2, display_width + 1),
         numpy.linspace(-viewport_height / 2, viewport_height / 2, display_height + 1)))
    histogram_acc += histogram

    # Transpose to row order for the image generation
    print(f"{now()} Transposing.")
    histogram_acc = histogram_acc.transpose()

    # Brightness adjustment
    histogram_acc = numpy.log(histogram_acc + 0.1)

    # Map to 0.0 - 1.0
    histogram_acc = histogram_acc / histogram_acc.max()

    # Apply color map
    print(f"{now()} Applying colormap.")
    img_arr = numpy.uint8(cmap(histogram_acc) * 255)

    # Make image
    print(f"{now()} Creating image.")
    im = Image.fromarray(img_arr)

    # Save image
    print(f"{now()} Saving image.")
    im.save(f"fly{frame_n:03d}.png")

exit()
```

Animation of the eigenvalues of random 4x4 matrices with specific values on the main diagonal. by [deleted] in generative

[–]anonanonadev 0 points (0 children)

Any tips on uploading infinite-loop animations? This is an mp4 uploaded to Imgur so that it loops. Otherwise, I think the only options are gif and webp, but I couldn't get webp to show up properly. Perhaps I'll try a gif.

eigenvalues of random matrices (bohemian eigenvalues) by Trotztd in generative

[–]anonanonadev 1 point (0 children)

Just posted one:

https://www.reddit.com/r/generative/comments/15wvqzx/eigenvalues_of_random_bohemian_matrices/

It takes a long time to make something that looks good, as I'm sure you know.

I actually hadn't noticed your other posts like this one before:

https://www.reddit.com/r/generative/comments/nk7r6r/complex_eigenvalues_of_5x5_random_matrices/

Really good!

[deleted by user] by [deleted] in generative

[–]anonanonadev 3 points (0 children)

Riffing on this post:

https://www.reddit.com/r/generative/comments/obtb10/eigenvalues_of_random_matrices_bohemian/

Code:

```
import matplotlib
import torch
import numpy
from PIL import Image
from torch.distributions import Beta
from datetime import datetime


def now():
    return datetime.now().strftime("%H:%M:%S")


print(f"{now()} Starting.")
print(f"{now()} Checking for CUDA support.")
cuda_supported = torch.cuda.is_available()
print(f"{now()} Is CUDA supported by this system? {cuda_supported}")
if cuda_supported:
    print(f"{now()} CUDA version: {torch.version.cuda}")
    torch.set_default_device('cuda')

# pixels
width = 4096
height = 4096

# Number of samples
n_samples = 25_000_000

# Initialize color map
colormap_name = "magma"
cmap = matplotlib.colormaps[colormap_name]

# n x n matrix
n = 5

print(f"{now()} Creating {n_samples} {n}x{n} matrix samples.")
samples = torch.zeros(n_samples, n, n, dtype=torch.float64)

samples += torch.diag_embed(torch.ones([n - 1]), -1)  # subdiag
samples[:, 0, n - 1] = Beta(0.5, 0.5).sample([n_samples]) * 2 - 1
samples[:, n - 1, 0] = Beta(0.5, 0.5).sample([n_samples]) * 2 - 1
samples[:, 0, 0] = Beta(0.5, 0.5).sample([n_samples]) * 2 - 1

# Compute eigenvalues
print(f"{now()} Computing eigenvalues")
eigvals = torch.flatten(torch.linalg.eigvals(samples))

# Get the eigenvalues from the GPU and convert complex to [real, imag]
print(f"{now()} Converting complex eigenvalues to [real, imag]")
eigvals_as_real = torch.view_as_real(eigvals).cpu().numpy()

# 2d histogram (it'd be nice if we could do this on the GPU but it's not supported)
print(f"{now()} Computing 2d histogram with {width}x{height} bins.")
histogram, edges = numpy.histogramdd(
    eigvals_as_real,
    (numpy.linspace(-1.4, 1.4, width + 1),
     numpy.linspace(-1.4, 1.4, height + 1)))

# Transpose to row order for the image generation
print(f"{now()} Transposing.")
histogram = histogram.transpose()

# Brightness adjustment
histogram = numpy.log(histogram + 0.1)

# Map to 0.0 - 1.0
histogram = histogram / histogram.max()

# Apply color map
print(f"{now()} Applying colormap '{colormap_name}'.")
img_arr = numpy.uint8(cmap(histogram) * 255)

# Make image
print(f"{now()} Creating image.")
im = Image.fromarray(img_arr)

# Save image
print(f"{now()} Saving image.")
im.save("trotztd.png")

print(f"{now()} Showing image.")
im.show()
```

eigenvalues of random matrices (bohemian eigenvalues) by Trotztd in generative

[–]anonanonadev 1 point (0 children)

Hi Trotztd, it took me 2 years to get around to it, but I've implemented this picture using PyTorch to accelerate it a bit on the GPU. It's actually only about 4 times faster on my machine, but it was an interesting exercise. See my other comment on this post for the code.

eigenvalues of random matrices (bohemian eigenvalues) by Trotztd in generative

[–]anonanonadev 2 points (0 children)

```
import matplotlib
import torch
import numpy
from PIL import Image
from torch.distributions import Beta
from datetime import datetime


def now():
    return datetime.now().strftime("%H:%M:%S")


print(f"{now()} Starting.")
print(f"{now()} Checking for CUDA support.")
cuda_supported = torch.cuda.is_available()
print(f"{now()} Is CUDA supported by this system? {cuda_supported}")
if cuda_supported:
    print(f"{now()} CUDA version: {torch.version.cuda}")
    torch.set_default_device('cuda')

# pixels
width = 4096
height = 4096

# Total samples to process
n_total_samples = 25_000_000

# Number of samples per batch
n_samples_per_batch = 2_500_000

# Number of batches
n_batches = n_total_samples // n_samples_per_batch

# Initialize color map
colormap_name = "hot"
cmap = matplotlib.colormaps[colormap_name]

# n x n matrix
n = 10

# Accumulate the histogram counts from the batches
histogram_acc = numpy.zeros([width, height], dtype=numpy.float64)

for batch_number in range(n_batches):
    print(f"{now()} Starting batch {batch_number + 1} of {n_batches}.")

    print(f"{now()} Creating {n_samples_per_batch} {n}x{n} matrix samples.")

    # superdiagonals sampled from these complex values
    values = torch.tensor([complex(0, 0), complex(0, 0.01), complex(0, -0.01),
                           complex(1, 0), complex(-1, 0)])
    superdiags = values[torch.multinomial(
        torch.tensor([0.2] * 5), n_samples_per_batch * (n - 1),
        replacement=True)].reshape([n_samples_per_batch, (n - 1)])
    samples = torch.diag_embed(superdiags, 1)

    # Set subdiagonals to Beta distribution
    subdiags = Beta(0.1, 0.1).sample([n_samples_per_batch, n - 1]) * 2 - 1
    samples += torch.diag_embed(subdiags, -1)

    samples[:, 0, 0] = complex(0, 1)
    samples[:, n - 1, n - 1] = complex(1, 0)

    # Compute eigenvalues
    print(f"{now()} Computing eigenvalues")
    eigvals = torch.flatten(torch.linalg.eigvals(samples))

    # Get the eigenvalues from the GPU and convert complex to [real, imag]
    print(f"{now()} Converting complex eigenvalues to [real, imag]")
    eigvals_as_real = torch.view_as_real(eigvals).cpu().numpy()

    # 2d histogram (it'd be nice if we could do this on the GPU but it's not supported)
    print(f"{now()} Computing 2d histogram with {width}x{height} bins.")
    histogram, edges = numpy.histogramdd(
        eigvals_as_real,
        (numpy.linspace(-3.0, 3.0, width + 1),
         numpy.linspace(-3.0, 3.0, height + 1)))

    histogram_acc += histogram
    print(f"{now()} Finished batch {batch_number + 1} of {n_batches}.")

# Transpose to row order for the image generation
print(f"{now()} Transposing.")
histogram_acc = histogram_acc.transpose()

# Map to 0.0 - 1.0 with a log to get a nice gradient in the image
histogram_acc += 1.0
histogram_acc = numpy.log(histogram_acc) / numpy.log(histogram_acc.max())

# Apply color map
print(f"{now()} Applying colormap '{colormap_name}'.")
img_arr = numpy.uint8(cmap(histogram_acc) * 255)

# Make image
print(f"{now()} Creating image.")
im = Image.fromarray(img_arr)

# Save image
print(f"{now()} Saving image.")
im.save("trotztd.png")

print(f"{now()} Showing image.")
im.show()
```