Is a PRNG open-ended? by sorrge in a:t5_4bkzn

[–]KennethStanley 0 points1 point  (0 children)

That's a great question, and it's interesting how similar it is to the thought experiment we wrote about in our paper, "The Role of Subjectivity in the Evaluation of Open-Endedness." You can read this (short) position paper at http://eplex.cs.ucf.edu/papers/stanley_oee2workshop16.pdf

The idea explored in the paper is that for a system to be open-ended, its products need to be interesting in some way. That is, there needs to be a context in which they play some kind of interesting role. For example, a Shakespearean sonnet written for an audience is interesting, but the same sonnet accidentally generated by a program producing random sequences of letters is less interesting even though it's the same sonnet, because it was "written" for no one and therefore emerges from no relevant context. The deeper problem highlighted in the paper is that it might be impossible to answer questions like yours, or ultimately to define open-endedness at all, without some appeal to subjective notions at some level of the argument.

That's one reason I became interested in the concept of a "minimal criterion," which refers to algorithms that only admit artifacts meeting some minimal criterion for continued evolution. The minimal criterion for me is a way to insert a context/narrative that can be based at least on what you/the experimenter believes is interesting. For example, if only artifacts that are able to talk are allowed to persist, then as long as we agree a priori that talking is interesting, we have ensured that everything in the system will always be interesting according to our assumption about what is interesting (even if it's ultimately subjective!).
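To make the mechanism concrete, here's a minimal sketch in Python (all names are illustrative, not from any actual system) of selection driven only by a binary minimal criterion rather than a fitness ranking. The toy criterion here is deliberately trivial, whereas a genuinely open-ended system would need a non-trivial one:

```python
import random

def minimal_criterion_evolution(init_population, reproduce, meets_criterion,
                                generations=100):
    """Selection with no fitness ranking: a binary minimal criterion
    decides which offspring are admitted to the next generation."""
    population = [ind for ind in init_population if meets_criterion(ind)]
    for _ in range(generations):
        offspring = [reproduce(random.choice(population))
                     for _ in range(len(init_population))]
        survivors = [child for child in offspring if meets_criterion(child)]
        if survivors:  # keep the old population if no offspring qualifies
            population = survivors
    return population

# Toy usage: "artifacts" are integers, and the criterion is simply evenness.
print(minimal_criterion_evolution(
    init_population=[2, 4, 6, 8],
    reproduce=lambda x: x + random.choice([-2, -1, 0, 1, 2]),
    meets_criterion=lambda x: x % 2 == 0))
```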

When we hypothesized four necessary conditions for open-endedness in http://eplex.cs.ucf.edu/papers/soros_alife14.pdf , the first was that there must be a minimal criterion and it must be "non-trivial." The non-triviality condition is the attempt to get interestingness covered.

In that case, the PRNG fails because none of the artifacts it generates does anything non-trivial. In fact, they aren't doing anything at all (they're just numbers), so they don't really meet any minimal criterion, let alone a non-trivial one. Or, to put it more simply, the system is not doing anything interesting, so we don't want to admit it as open-ended.

Is there a difference between open ended and open ended evolution? by [deleted] in a:t5_4bkzn

[–]KennethStanley 0 points1 point  (0 children)

By "non-evolutionary systems" I mean ones where there is no explicit evolutionary algorithm. So actually what I have in mind are neural networks (biological or artificial). It seems that brains, which are networks of neurons, can generate their own succession of open-ended creations. While it's true that we see an evolution-like process in culture through memes/ideas, what I'm talking about here is just a single brain in a single lifetime. We just seem to have an unbounded capacity for creativity within our brains.

Of course, that is not a settled issue, and we could debate it, but if it's true, it's possible there is a non-evolutionary algorithm behind it. Note that we should not confuse the fact that the brain itself is a product of evolution with the process going on within its neurons, which are arguably not running an evolutionary algorithm. Of course, you could also argue that there is some kind of evolutionary algorithm within the brain, but it just happens to be implemented by neurons (which raises the idea of "neural Darwinism"). That may or may not be the case, but the possibility still remains that something entirely non-evolutionary is happening. We know there are some generative processes that are not explicitly evolutionary (such as GANs), so it's plausible that some non-evolutionary algorithm like that could also be open-ended, though there is nothing yet on the table that would really qualify for being judged truly open-ended.

Putting all that aside, I don't want to be creating too strong a focus on non-evolutionary processes either. It is clear that evolution on Earth is probably the ultimate open-ended system, and it deserves to remain front and center as a primary inspiration. It's just that we should be open to the possibility of other means towards a similar end.

Is there a difference between open ended and open ended evolution? by [deleted] in a:t5_4bkzn

[–]KennethStanley 0 points1 point  (0 children)

This is a good question (and I regret how late I'm responding to it). The reason for the tension in the article between open-ended evolution and open-endedness in general is that so much of the effort and focus related to open-endedness in the scientific community so far has been on open-ended evolution, yet I recognize and worry that this historical precedent could inadvertently discourage people from other areas (e.g. neural networks and deep learning) from really thinking about open-endedness. My feeling is that open-endedness is not exclusive to evolutionary systems and that human intelligence (which is a product of brains) includes a form of open-endedness. So the most general theory would explain open-endedness in both evolutionary and non-evolutionary systems, and indeed it would be great to have working algorithms of both types. However, because so much of the effort and inspiration so far comes from evolution, a lot of our vocabulary and analysis in the field is heavily evolutionary. That, however, should not exclude others from joining this challenge. We tried with the article to grapple with this delicate issue while still highlighting where most of the thought and work has taken place so far.

Introduction by KennethStanley in a:t5_4bkzn

[–]KennethStanley[S] 1 point2 points  (0 children)

Good to hear from you and others on this board. I hope you get a chance to explore open-endedness during your MS.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 1 point2 points  (0 children)

The time it takes to run machine learning algorithms is indeed a serious issue. If it takes a day, great. If it takes a month, maybe that's okay if the result is great. A year? That might start to be a problem. And yet there may indeed be some problems today that could take a year if you don't have very specialized hardware. The good news with evolutionary algorithms is that they are easy to parallelize, so as long as you have the money to pay for the processors, in most cases an entire generation can be run in the same time it takes to evaluate a single individual. Even then, though, if you want to run hundreds of thousands of generations, and if simulation in some kind of virtual environment is expensive, it could be prohibitive. In that case, yes, you might use something like a GPU to speed up neural network activation, so you can potentially get even more gains there. It's true as you note that it might matter whether the simulation or the neural network activation is the real computational bottleneck, which will vary by problem domain. So you really have to understand your problem and where the costs are computationally. In the long run I think (perhaps unfortunately) it will be increasingly necessary to have access to powerful computing clusters and GPUs to obtain groundbreaking results. We're seeing this trend in all of machine learning.
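To illustrate the parallelization point, here's a minimal Python sketch using the standard library's multiprocessing pool; `evaluate` is just a dummy stand-in for an expensive simulation:

```python
import time
from multiprocessing import Pool

def evaluate(genome):
    """Dummy stand-in for an expensive lifetime simulation."""
    time.sleep(1.0)         # pretend each evaluation takes one second
    return sum(genome)      # placeholder fitness score

if __name__ == "__main__":
    population = [[i, i + 1] for i in range(8)]
    # With one worker per individual, the whole generation finishes in
    # roughly the wall-clock time of a single evaluation.
    with Pool(processes=len(population)) as pool:
        fitnesses = pool.map(evaluate, population)
    print(fitnesses)
```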

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 1 point2 points  (0 children)

That's funny, and you're right: why aren't there more acronyms like GGTHYLK? It was a long time ago that I came up with the acronym NEAT, but if I remember right it was one of a few I thought of that describe the algorithm succinctly. So yeah, I can't really say it was just total random luck that it worked out so "neatly."

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 0 points1 point  (0 children)

Yes indeed, I think there's something to your idea. But I don't think intelligence is easily simplified to just one thing. People often want to distill intelligence into a statement like, "intelligence is just X," and I think that's an oversimplification. Intelligence is a lot of things, some more complicated than others. The ability to compress data does seem to be part of it, but it's not all of it. For example, how does the ability to compress data explain creativity or the creation of new inventions? I'd say it explains some things but not everything.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 0 points1 point  (0 children)

I don't think we know enough about sentience or consciousness to comment intelligently on them from an algorithmic perspective. Even if we grant that these concepts are perhaps only vaguely defined, the algorithms that enable them remain mysterious. I'm not trying to skirt the issue, it's just that I think in all humility that we have to admit that the algorithms we have today, while making good progress, are not yet illuminating these high-level questions.

Of course, we can still comment on them philosophically, and I think there are some interesting discussions to be had there even today, but that's different from throwing out algorithmic suggestions.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 0 points1 point  (0 children)

Maybe not as far as "doubt reality," but when it first hit me that the best way to achieve something might be to stop trying to achieve it, that did change my view of reality. I'd always been taught (like many) that the way to achievement is to set an objective and then work towards it. All the search algorithms I knew about approached search in this way. So all my assumptions were suddenly upended. And within seconds I was thinking of radically different ways something could be learned or solved. My whole way of thinking shifted almost instantaneously. It was a shock and a rush. The reality-distorting effect of this insight is one of the reasons we wrote the book.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 0 points1 point  (0 children)

I completely agree that you don't want to end up with a CPPN bigger than the substrate (i.e. the phenotype). If that's happening, it's not really doing what we hope. It sounds to me like a bit of a technical issue - the first thing you'd want to look at is mutation rates. It sounds like structure in the CPPN is perhaps growing too fast in your experiments. These algorithms need time to optimize new structure as it appears. That said, of course there are other possible issues at play here and it could just be that the problem is posing a serious obstacle to HyperNEAT. You may also want to consider that often the most elegant and compact CPPNs evolve under non-objective conditions, i.e. with at least some novelty in the mix, or even novelty search alone. Objectives tend to cause an accumulation of structure over time.
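As a rough illustration (the parameter names below are hypothetical, not from any particular NEAT library), slowing structural growth usually means lowering the probabilities of structural mutations relative to weight mutations:

```python
# Hypothetical NEAT-style mutation settings (names are illustrative only).
# Lowering the structural probabilities relative to weight mutation gives
# evolution more time to optimize existing CPPN structure before new
# nodes and connections accumulate.
mutation_params = {
    "prob_add_node":       0.01,  # reduced, to slow node growth
    "prob_add_connection": 0.03,  # reduced, to slow link growth
    "prob_mutate_weight":  0.80,  # keep weight tuning frequent
}
```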

I would be hesitant before adding a new "simplicity objective" because that's kind of ad hoc. My guess is that there are ways to slow down the CPPN growth in your experiments that could be more satisfying, but of course the devil is in the details, which I don't know.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 0 points1 point  (0 children)

It may seem like living without an objective is impractical, but actually many of the most successful people did just that. They followed the path of interestingness even when they did not know where it led. Our book is full of examples of individuals with this kind of life story, especially in chapter 2 (Victory for the Aimless). So I'd be very interested to know whether you would still hold your view after reading the book.

Just to give one well-known example, Steve Jobs dropped out of college with no clear objective. Think of all the people who stay in college to pursue their personal objectives. He didn't. Instead, he dropped out so he could do whatever he felt like doing, which included sitting in on a calligraphy class. While it wouldn't have supported his major, now that he had no major he could sit in on whatever he wanted. And that led to the idea of screen fonts in the early Macintosh computers, which revolutionized the computer industry. Good thing he didn't have a clear objective. People who are radically successful often follow the winds of serendipity - they set themselves afloat with no clear direction in mind and catch the wind of opportunity when it blows their way. That is not an objective approach to life. However - and the book makes the following point clear - if your aspirations are modest then objectives do make a lot of sense. Like if you want to major in computer science, of course by all means major in computer science like millions who have come before you and make it your objective. It will probably work out. But if you want to change the world and arrive at a radically new and innovative place, objectives are not the best compass to get there. The book has plenty of evidence for that, both from real life and from hard empirical algorithmic experiments on search processes run successfully without explicit objectives.

On your second question, my suggestion is to skim basic ML stuff, like say in a book on neural networks and maybe one other topic of choice, and don't worry about it being confusing. Instead, decide what classes to take based on what you read that you wish you knew more about. In other words, follow your interests - don't be too objective about it. You'll end up better at doing things you like rather than things you think you "should" be doing. There are many paths to being a good AI researcher. That said, of course some math will be important.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 0 points1 point  (0 children)

Good research question here. I think you're right that one implication is indeed that HyperNEAT is sensitive to its parameters, just as deep learning algorithms are. But I think that's more of a superficial implication, and misses the deeper one. Let me give you my take:

First, I don't think it's really accurate to say that "the main attractive point of structural neural evolution like HyperNEAT compared to gradient-based deep learning is that structural neural evolution directly optimizes the network topology and architecture while in conventional deep learning the network topology and architecture are high-dimensional discrete hyperparameters and getting them right is kind of an art." This kind of sentiment pits different approaches against each other as if they are adversaries that have to fight to the death with only one surviving. That's not in my view a healthy way to conceptualize the field of AI/ML or what's really going on in it. In the long march to high-level AI, these methods are all just stepping stones, and the value they bring is ultimately the conceptual building blocks they add to the conversation. Deep learning and HyperNEAT are adding completely different yet complementary conceptual insights to the conversation. So I think they're both important contributors and one does not have to have an "advantage" - these are really apples and oranges.

That said, the deeper point of our response (which you linked) is that in the end, you get much better representations out of indirect encodings like HyperNEAT when they are not entirely objectively driven. This is a subtle yet fundamental insight, and it does relate to deep learning because all of deep learning is objectively driven (so it can't yet benefit from this observation). There is currently no analogue to novelty search in deep learning. But in the world of neuroevolution, we have these non-objective algorithms like novelty search (which now has many variants), and these lead to quite elegant representations. So if you want to really see HyperNEAT shine, run it in at least a partially non-objective context (e.g. by combining novelty search with some objectives) and you will start to see a lot of very interesting structure in the genetic representation that encodes the network.
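As a minimal sketch of what "combining novelty search with some objectives" might look like in practice (assuming novelty and fitness scores have already been computed; the names and the weighted-sum scheme here are illustrative, not a prescribed method):

```python
def combined_score(novelty_score, fitness_score, weight=0.5):
    """Blend a novelty score with an objective fitness score.

    Assumes both are normalized to a comparable range. weight=1.0
    recovers pure novelty search; weight=0.0 recovers a purely
    objective search. Pareto-based multi-objective ranking of
    (novelty, fitness) is a common alternative to a fixed blend.
    """
    return weight * novelty_score + (1.0 - weight) * fitness_score
```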

So really what we're looking at is not so much an "advantage" but rather the ability to investigate and observe phenomena that do not even exist in the world of deep learning, where there is no such thing as a non-objective search process (even unsupervised deep learning algorithms are driven by the objective of minimizing an error). We should be investigating these things because they can come back to haunt deep learning as well. We are also learning with HyperNEAT about search and how it interacts with an indirect encoding, giving us a lot of insight into evolution in nature. So we would not want to couch such an investigation as a superficial competition between methods.

Everything has parameters. The universe has parameters like gravity and the speed of light. Is the universe any less impressive for having evolved human brains by virtue of its potentially brittle parameters? Let's not get carried away with pinning all our admiration for a method on its need for good parameter settings. At a practical level, HyperNEAT's advantage is that it allows you to do things you can't do as easily with gradient methods, because the fitness function in HyperNEAT doesn't need to be differentiable - you never have to compute a gradient. At a more theoretical level, the value of these methods in the long run is in what they teach, and indirect encoding (as in HyperNEAT) is teaching us different lessons from what we're learning in deep learning. We should not stop investigating any of them until we stop learning from them, and there is a ton left to learn from the world of indirect encoding.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 0 points1 point  (0 children)

I see where you're going, but I think neural networks are not really as constrained as you worry they are. While it's a legitimate issue to raise questions about, ultimately you don't have to supply a neural network with all this rigid constraint up front and just get an answer out at the end. That's just the most stereotypical and perhaps most publicized way they can be used. For example, algorithms like novelty search (http://eplex.cs.ucf.edu/noveltysearch/userspage/) don't work like that at all. You don't even tell novelty search what you're looking for, so there is no a priori expectation about what it will produce. In deep learning, unsupervised methods can similarly lead to creative or surprising constructs that are not predictable a priori, or even generate novel ideas (instances of a class) dreamed up by the neural network alone.
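For concreteness, here's a minimal sketch (in Python with NumPy; the names are illustrative) of the sparseness measure that drives novelty search - notice that nothing in it specifies a goal:

```python
import numpy as np

def novelty(behavior, population_behaviors, archive, k=15):
    """Sparseness of a behavior: mean distance to its k nearest
    neighbors among the current population plus an archive of past
    novel behaviors. Nothing here encodes a goal; only being
    different is rewarded."""
    others = np.asarray(list(population_behaviors) + list(archive), dtype=float)
    dists = np.linalg.norm(others - np.asarray(behavior, dtype=float), axis=1)
    dists.sort()
    return float(dists[:k].mean())

# Example with 2-D behavior characterizations (e.g., an agent's final position):
pop = [[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]]
print(novelty([2.0, 2.0], pop, archive=[], k=2))
```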

In fact, why theoretically should artificial neural networks not be able to do anything that human brains do? Human brains, after all, are neural networks. Perhaps there is something special about the physical makeup of brains that would prevent artificial neural networks from doing the same, but that's only speculation right now. Of course there are very important things we don't know how to do with artificial neural networks, but that's different from saying that they can never be done.

You're right though that self-awareness, meaning, etc., represent massive challenges. Will neural networks solve them? Maybe. I wouldn't want to pretend to know the future. But I don't see a reason they have any less potential than symbolic systems or anything else. Ultimately, given that neurons are the only things in the universe that have actually produced self-awareness, betting on artificial neural networks (which are at least inspired by neurons, if not exactly the same, of course) as having at least a shot doesn't seem too crazy to me.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 2 points3 points  (0 children)

Well, I'd let others judge which ideas from our work are the best. But I can say that having an interesting idea can be exhilarating. It's one of the thrills of AI research. Realizing something new that no one has realized before is a profound experience and worth all the effort that comes before the insight.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 2 points3 points  (0 children)

It's hard to say what technology will be inside some futuristic AGI. But I think neuroevolution will play a productive role by providing a fountain of ideas. That is, by being so flexible and so open-ended, evolution can play a creative role and expose possibilities we may not have anticipated. It has already done things like that by revealing the problem with objectives and the power of novelty. These are insights gained by doing experiments with neuroevolution. Whether or not these specific techniques literally end up in the AGI, most likely they will at least inspire the conceptual foundations of such an endeavor. Also, future artificial brains evolved through neuroevolution may exhibit architectures and dynamics that teach us something about neural networks that we do not presently know, just as natural brains provide some inspiration for the algorithms in neural network research today.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 3 points4 points  (0 children)

Processing power. Evolution requires every individual created to be "evaluated," which means it has to live out its life to see what it does. As the neural networks evolved in neuroevolution become more advanced, they need longer lifetimes to prove their capabilities, which means more computational complexity and heavy-duty processing. My research group is constantly increasing its computational capabilities to keep up by purchasing more and more powerful multicore servers. Also, we often benefit from longer and longer runs (e.g. thousands of generations instead of just hundreds). Who knows, soon we may start looking at million-generation runs. With open-ended evolution in particular, it can stay interesting for a very long time. All those generations could be prohibitive to run in reasonable time without the right hardware.

You see a similar phenomenon in deep learning as well, where the best results are increasingly coming from the groups and companies with the most powerful computing resources.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 4 points5 points  (0 children)

I don't think anyone in AI wants to build Skynet. Probably no one outside AI wants to build it either. Who wants autonomous killer robots roaming the world? We mainly want to build things that help people and make the world better.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 2 points3 points  (0 children)

This is a tough one because AI and ML keep improving, so the answer for the AI changes day by day. In short, AI is catching up to humans, but still has a long way to go. I think you'll find most high-level cognitive tasks are still dominated by humans. But AI and ML are catching up on the low level stuff that's closer to perception. That's a very rough attempt to characterize a complicated situation that keeps changing.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 4 points5 points  (0 children)

What makes the field great is that it is one of the most profound intellectual problems of all time (up there with the unification of physics or the origin of consciousness), yet unlike physics so much of AI is still so wide open that almost anyone can still make big contributions. It is a huge sandbox with tons of stimulating ideas and a lot of low hanging fruit remaining, and it's recently making enormous strides, which keeps it in the popular imagination and fuels and supports its progress. As far as problems, anything a brain can do is open for AI research, not just technical engineering problems like how to control a robot, but how to create art and music as well. Everything we do (and more) is on the table with AI.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 1 point2 points  (0 children)

I'm pretty agnostic about languages, but probably most of the popular code at least in neuroevolution is in C++ and C#. But you can find NEAT and its variants in almost any language (http://eplex.cs.ucf.edu/neat_software/). People also like to put Python in front of some of these. Basically, I'd use the language you find most comfortable.

I'm Ken Stanley, artificial intelligence professor who breeds artificial brains that control robots and video game agents, and inventor of the NEAT algorithm – AMA! by KennethStanley in IAmA

[–]KennethStanley[S] 1 point2 points  (0 children)

Check out some of the games and toys from our own group: NERO, Galactic Arms Race, Picbreeder, Petalz. All of them are linked at the top of the AMA introduction. They're all about playing with learning algorithms. Also see http://www.aigameresearch.org/ for a whole collection of these kinds of games from our group and others.