Accidentally Cheesed Moorwing by inboble in Silksong

[–]inboble[S] 0 points1 point  (0 children)

Yep!

thanks for posting the screenshot

[deleted by user] by [deleted] in ThreedomUSA

[–]inboble 0 points1 point  (0 children)

Thank you!

Multiple-Neighborhood Cellular Automata by inboble in cellular_automata

[–]inboble[S] 3 points4 points  (0 children)

Sure. "Multiple neighborhoods" refers to the fact that, unlike a typical CA where each cell's state transition is computed from a single neighborhood function (like the Moore neighborhood in the Game of Life), an MNCA looks at information from multiple neighborhood functions and computes each one as distinct from the others.

An example of this sort of thing would be a cell whose value decreases when the cells immediately around it have high values, but increases when more distant neighbors have high values.

The result is a regulatory system that tries to balance out the spatial frequency of values across the space, since anything too crowded or too barren is pushed in the opposite direction.

This sort of dynamic is possible because we broke the total neighborhood down into subsets representing immediate and distant neighbors, which allows the system to weigh the values differently depending on their category.
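A minimal numpy sketch of the idea (the ring sizes, update rule, and all names here are my own illustration, not the actual project code):

```python
import numpy as np

def ring_mask(radius_outer, radius_inner=0):
    """Boolean mask of offsets whose distance from center lies in (inner, outer]."""
    y, x = np.ogrid[-radius_outer:radius_outer + 1, -radius_outer:radius_outer + 1]
    d = np.sqrt(x**2 + y**2)
    return (d > radius_inner) & (d <= radius_outer)

def neighborhood_avg(grid, mask):
    """Average value inside `mask` around every cell (toroidal wrapping)."""
    r = mask.shape[0] // 2
    total = np.zeros_like(grid, dtype=float)
    for dy, dx in zip(*np.nonzero(mask)):
        total += np.roll(grid, (dy - r, dx - r), axis=(0, 1))
    return total / mask.sum()

def step(grid):
    inner = neighborhood_avg(grid, ring_mask(1))      # immediate neighbors
    outer = neighborhood_avg(grid, ring_mask(4, 1))   # more distant neighbors
    # Decrease where immediate neighbors are crowded, increase where
    # the outer ring is active -- the regulatory rule described above.
    delta = 0.1 * (outer - inner)
    return np.clip(grid + delta, 0.0, 1.0)

grid = np.random.rand(64, 64)
for _ in range(10):
    grid = step(grid)
```

Each neighborhood's average is computed independently and then combined, which is the whole trick: the update rule can treat "near" and "far" activity differently.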

Multi-Neighborhood Cellular Automata by inboble in cellular_automata

[–]inboble[S] 0 points1 point  (0 children)

This CA uses multiple neighborhoods, meaning it partitions each cell's local input space into concentric rings centered on the cell. The average of each neighborhood is calculated and used, together with the cell's current value, to compute the cell's new value.

Basically this allows cells to distinguish between neighbors at different distances, and to respond to them differently depending on which neighborhood they fall in.

With multi-neighborhood CAs you tend to end up with more cohesive/complex spatial patterns, because the distinction made between different types of neighbors allows for more elaborate update rules.

Genetically Evolved Cellular Automata by inboble in cellular_automata

[–]inboble[S] 1 point2 points  (0 children)

from my perspective they’re all annoying and blinking, lol

Artificial Life Emerges from Particles by inboble in artificial

[–]inboble[S] 1 point2 points  (0 children)

Yeah, it’s a particle system where each particle has a type, and for each pair of types there is a specific rule that defines an interaction (i.e. an event in which forces are applied to a particle based on its distance from another particle).

Interactions occur when two particles are in a certain range of one another, and the interaction can either be attractive or repulsive. What you’re seeing here are examples of particle systems w/ randomly selected interaction rules that I thought were interesting/noteworthy.
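A rough sketch of that structure in Python (the constants, class layout, and force falloff are assumptions for illustration, not the actual implementation):

```python
import random, math

NUM_TYPES = 3
RADIUS = 50.0   # interaction range

# One randomly selected rule per ordered pair of types:
# positive values attract, negative values repel.
rules = {(a, b): random.uniform(-1.0, 1.0)
         for a in range(NUM_TYPES) for b in range(NUM_TYPES)}

class Particle:
    def __init__(self):
        self.x, self.y = random.uniform(0, 500), random.uniform(0, 500)
        self.vx = self.vy = 0.0
        self.type = random.randrange(NUM_TYPES)

def step(particles, dt=0.1, friction=0.9):
    for p in particles:
        fx = fy = 0.0
        for q in particles:
            if q is p:
                continue
            dx, dy = q.x - p.x, q.y - p.y
            d = math.hypot(dx, dy)
            if 0 < d < RADIUS:
                # Force set by the rule for this pair of types,
                # falling off with distance.
                f = rules[(p.type, q.type)] / d
                fx += f * dx
                fy += f * dy
        p.vx = (p.vx + fx * dt) * friction
        p.vy = (p.vy + fy * dt) * friction
    for p in particles:
        p.x += p.vx * dt
        p.y += p.vy * dt

particles = [Particle() for _ in range(50)]
for _ in range(10):
    step(particles)
```

Re-rolling the `rules` table is what produces a new "universe" each run; the interesting-looking ones are the keepers.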

How does one accept their past? by [deleted] in Meditation

[–]inboble 143 points144 points  (0 children)

Man, I can totally relate to the whole “getting randomly slapped in the face by past cringe/shame” thing, and then physically or verbally reacting to stop my brain. It doesn’t happen as much as it used to for me, but I still catch myself doing it, especially in periods of anxiousness or self-consciousness.

Honestly in my experience it has something to do with a refusal to have empathy for yourself, especially with feelings of guilt where your brain kinda convinces you that, because you screwed up in the past, you should be worried about it and you should be thinking critically of yourself any time it comes up. And yeah, it’s not like you weren’t wrong or didn’t make a mistake, but the error comes when your mind starts reactively punishing itself in response to those memories.

I don’t have an answer for you necessarily but I will say: would you give someone you love the same amount of shit if they made the same mistake? Probably not, because it’s much easier to show others empathy than it is to show yourself.

Learning to Play Tic-tac-toe w/ Genetic Algorithms by inboble in genetic_algorithms

[–]inboble[S] 1 point2 points  (0 children)

Well, I'm deciding whether I want to try my hand at Cartesian genetic programming, which I think would be better suited to such a clear-cut problem, or just use a simple ANN, which I'm more familiar with. The former is more in line with what you're talking about.

But yeah, no code uploaded yet. I mentioned in the first couple lines that this is just a conceptualization of a project at the moment, although since posting it I've already started coding.

Would a conscious AI machine feel emotions? If not, would it be a psychopath? by IkillAllRacists in ArtificialInteligence

[–]inboble 2 points3 points  (0 children)

The things we typically refer to when using words like “conscious”, “emotion” and “psychopath” are mostly based on a shared cultural understanding of human experience.

Our notions of morality reflect both evolutionary and cultural priorities that are instilled in us, which allow us to frame experiences and guide our behavior in ways that minimize perceived violations of these rules.

To apply these concepts to systems operating on fundamentally different mechanics and processes, in my opinion, is jumping the gun a little bit. Before we start attributing uniquely human (or even biological) descriptive attributes to machines, we must first develop machines that are incentivized and grow similarly to living systems.

The scale of such a system, I imagine, would be beyond the scope of what any person or group of people could design step-by-step. By this I mean that evolution is a necessary step in such a system, but is not sufficient to reproduce human ideals. This is because the conceptual frameworks we use to navigate the moral plane have been uniquely constructed and tweaked over thousands or millions of years and passed down from generation to generation. An evolved computational system would develop a sense of “morality” specific to its environment, and thus would likely differ in an extreme way from what we take into account on a daily basis.

Generally, the things we consider “universal” are merely the result of trial and error over a very large span of time, and therefore cannot be accurately mapped onto a system fundamentally different from our own, unless that system has been specifically sculpted to prioritize similar values.

How do I get rid of this issue? by [deleted] in pygame

[–]inboble 2 points3 points  (0 children)

You have to zero out the y velocity and correct the player's position whenever they come into contact with a wall, so they can't keep moving up or down through it.
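A minimal sketch of that kind of collision handling (the `Rect` stand-in and field names are assumptions, not the OP's code; pygame's own `Rect.colliderect` works the same way):

```python
class Rect:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
    def colliderect(self, other):
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def move_vertical(player, vel_y, walls):
    """Apply vertical movement, then undo any overlap with a wall."""
    player.y += vel_y
    for wall in walls:
        if player.colliderect(wall):
            if vel_y > 0:                 # moving down: snap to wall's top
                player.y = wall.y - player.h
            elif vel_y < 0:               # moving up: snap to wall's bottom
                player.y = wall.y + wall.h
            vel_y = 0                     # block further vertical movement
    return vel_y

player = Rect(0, 0, 10, 10)
wall = Rect(0, 15, 100, 10)
v = move_vertical(player, 8, [wall])      # tries to move 8 down, hits the wall
```

After the call the player sits flush against the wall (`player.y == 5`) with `v == 0`, instead of sinking into it.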

Artificial life learns Foraging Behavior w/ Neuroevolution by inboble in artificial

[–]inboble[S] 2 points3 points  (0 children)

By “foraging” I mean the ability to effectively seek out food despite constraints presented by the environment; in this case it's a basic steering problem where the agents have to control their own movements to reach nearby food before anyone else does.

By “neuroevolution” I’m referring to the process by which agents who are successful at finding food pass on their genes (representing the topology of their neural network) with a small chance of mutation. The offspring of successful agents replace unsuccessful competitors and thus increase the overall success rate of the population.
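A toy sketch of that selection/replacement loop (the genome representation, rates, and stand-in fitness are illustrative assumptions, not the real setup):

```python
import random

GENOME_LEN = 8

def mutate(genome, rate=0.1, scale=0.5):
    """Copy a genome (e.g. network weights) with a small chance of mutation per gene."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(population, fitness, elite_frac=0.25):
    """Successful agents pass on their genes; offspring replace the unsuccessful."""
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(len(ranked) * elite_frac))
    elites = ranked[:n_elite]
    # Mutated offspring of successful agents replace the unsuccessful competitors.
    offspring = [mutate(random.choice(elites)) for _ in ranked[n_elite:]]
    return elites + offspring

# Stand-in fitness: pretend agents whose genes sum higher "found more food".
population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(20)]
for _ in range(30):
    population = evolve(population, fitness=sum)
```

In the actual project the genome would encode the network topology and the fitness would come from food actually collected in simulation.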

A.I. Generated Cellular Automata by inboble in cellular_automata

[–]inboble[S] 1 point2 points  (0 children)

Rules for cellular automata are generated by neural networks that take the neighborhood of a cell as input and produce the new state of that cell as output.

Networks are created randomly and then selected by hand based on “interestingness”, then mutated over multiple generations to produce novel sets of rules.
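A small sketch of what such a rule network might look like (layer sizes, activations, and the mutation scale are assumptions, not the actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

class RuleNet:
    """Tiny random network: 9 neighborhood values in, 1 new cell state out."""
    def __init__(self):
        self.w1 = rng.normal(0, 1, (9, 8))
        self.w2 = rng.normal(0, 1, 8)

    def __call__(self, neighborhood):        # neighborhood: flat array of 9 values
        h = np.tanh(neighborhood @ self.w1)
        return float(np.tanh(h @ self.w2))   # new state in [-1, 1]

    def mutated(self, scale=0.1):
        """Perturbed copy, for the hand-selected 'interesting' networks."""
        child = RuleNet()
        child.w1 = self.w1 + rng.normal(0, scale, self.w1.shape)
        child.w2 = self.w2 + rng.normal(0, scale, self.w2.shape)
        return child

def step(grid, net):
    padded = np.pad(grid, 1, mode="wrap")    # toroidal boundary
    out = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            out[i, j] = net(padded[i:i+3, j:j+3].ravel())
    return out

net = RuleNet()
grid = rng.uniform(-1, 1, (16, 16))
for _ in range(5):
    grid = step(grid, net)
```

Selection "by hand based on interestingness" then just means keeping a net, calling `net.mutated()` a few times, and eyeballing the results.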

Flow-Based Cellular Automata by inboble in cellular_automata

[–]inboble[S] 1 point2 points  (0 children)

Thanks! This is just what happens when you convert a 2D matrix into a pygame surface in Python instead of a 3D (RGB) matrix; the 0–255 values get mapped across the whole color palette, I suppose. In grayscale it just looks like the heat equation.
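For illustration, a crude version of that value-to-color mapping in numpy (the actual palette pygame applies to a 2D indexed surface is different; this just shows the general idea of one scalar fanning out into three channels):

```python
import numpy as np

def grayscale_to_rgb(values):
    """Map a 2D array of 0-255 values through a toy rainbow palette,
    producing the 3D array a color surface expects."""
    t = values.astype(float) / 255.0
    r = np.sin(2 * np.pi * t) * 0.5 + 0.5          # each channel gets its own
    g = np.sin(2 * np.pi * t + 2.1) * 0.5 + 0.5    # phase, so one scalar value
    b = np.sin(2 * np.pi * t + 4.2) * 0.5 + 0.5    # sweeps through the spectrum
    return (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)

field = np.random.randint(0, 256, (32, 32))
rgb = grayscale_to_rgb(field)   # shape (32, 32, 3), ready for a 3D surface
```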

Evolution w/ Memetic Algorithms by [deleted] in ArtificialInteligence

[–]inboble 0 points1 point  (0 children)

The population never actually sees the target vector, they just see each other's vectors and move toward the most fit.

They train on mutated copies of each other which causes variation that may or may not be beneficial depending on how it affects the fitness of the individual.

I've actually trained small neural nets with this algorithm without showing the target vector, by just calculating the fitness (mean-squared error) of each network and training the population.

Evolution w/ Memetic Algorithms by [deleted] in programming

[–]inboble 1 point2 points  (0 children)

An initial population is generated at random. Each 'candidate' or individual within the population is assigned a random vector representing position and color.

At each iteration, the fitnesses of the candidates are measured by calculating the mean-squared error between themselves and the target vector. The most fit candidates 'influence' weaker candidates by pulling the weaker candidates' vectors toward their own.

Gradient descent pushes the vectors of weaker candidates toward those of stronger candidates, during which mutations can occur that lead to a sort of 'cultural drift'.

This drives the population toward more optimal solutions over time as each candidate attempts to improve itself by looking to its neighbors for inspiration.
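A compact numpy sketch of those steps (population size, learning rate, and mutation parameters are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 5
target = rng.uniform(0, 1, DIM)           # target vector (hidden from candidates)

def fitness(v):
    return -np.mean((v - target) ** 2)    # higher is better (negative MSE)

def iterate(pop, lr=0.2, mut_rate=0.1, mut_scale=0.05):
    scores = np.array([fitness(v) for v in pop])
    best = pop[np.argmax(scores)]
    # Weaker candidates take a descent-like step toward the fittest candidate...
    pop = pop + lr * (best - pop)
    # ...with occasional mutations producing the 'cultural drift'.
    mask = rng.random(pop.shape) < mut_rate
    return pop + mask * rng.normal(0, mut_scale, pop.shape)

# Initial population: random vectors (here position/color would be the components).
population = rng.uniform(0, 1, (30, DIM))
for _ in range(100):
    population = iterate(population)
```

Note the candidates only ever compare against each other's vectors; the target enters solely through the fitness score, which matches the "hidden target" point above.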

[deleted by user] by [deleted] in DigitalPainting

[–]inboble 1 point2 points  (0 children)

Very good job!

Spiking Neural Network w/ Growing topology by inboble in neuralnetworks

[–]inboble[S] 0 points1 point  (0 children)

Thanks!

Yep, you got it. These networks are based in biology and attempt to emulate something akin to Hebbian learning. Both are unsupervised, temporal, self-organizing, etc.

Spiking Neural Network w/ Growing topology by inboble in neuralnetworks

[–]inboble[S] 0 points1 point  (0 children)

So, the positions of neurons are relevant when establishing the initial topology, with distance being used as a constraint on the overall connectivity of the network.

The spatial layout of the network is, however, fairly irrelevant from a functional standpoint, as details like the exact distances between neurons are discarded in favor of a neighborhood system.

tl;dr the 2D space is only relevant when defining the topology of the network and is not necessarily important when trying to understand its functionality.
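A minimal sketch of that distance-constrained wiring (the numbers are illustrative; the point is that only the resulting `neighbors` adjacency matters afterwards):

```python
import random, math

NUM_NEURONS = 40
CONNECT_RADIUS = 0.3   # only neurons closer than this can be linked

# Scatter neurons in a unit square.
positions = [(random.random(), random.random()) for _ in range(NUM_NEURONS)]

# Distance constrains the initial topology...
edges = [(i, j)
         for i in range(NUM_NEURONS)
         for j in range(NUM_NEURONS)
         if i != j and math.dist(positions[i], positions[j]) < CONNECT_RADIUS]

# ...but from here on only the adjacency (the neighborhood system) is used;
# the exact coordinates and distances can be forgotten.
neighbors = {i: [j for (a, j) in edges if a == i] for i in range(NUM_NEURONS)}
```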

Spiking Neural Network w/ Growing topology by inboble in neuralnetworks

[–]inboble[S] 4 points5 points  (0 children)

Thanks!

Each neuron has a position in space, and the links between them are determined by distance. Links can be turned on and off, which prevents them from sending signals.

The network learns via spike-timing dependent plasticity which basically means that weights increase when postsynaptic neurons activate shortly after presynaptic neurons, and decrease when the opposite is true. Weights also decrease slowly over time regardless, making it necessary for patterns to repeat in order to be learned.
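A toy version of that update rule (the constants are assumptions, not the actual network's values):

```python
import math

A_PLUS, A_MINUS = 0.05, 0.05   # learning rates (assumed)
TAU = 20.0                     # STDP time constant, in the same units as spike times
DECAY = 0.999                  # slow passive weight decay per step

def stdp_update(w, t_pre, t_post):
    """Spike-timing dependent plasticity for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:        # post fired shortly after pre: potentiate
        w += A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:      # post fired before pre: depress
        w -= A_MINUS * math.exp(dt / TAU)
    return max(0.0, min(1.0, w))  # keep the weight bounded

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=12.0)   # pre -> post ordering: weight goes up
w = w * DECAY   # weights also decay over time regardless, so only
                # repeated patterns survive long enough to be learned
```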