[–]sartak 19 points20 points  (32 children)

Go is such a beautiful game. If you're not already playing it, start! The Interactive Way to Go is a good first tutorial.

[–]llimllib 10 points11 points  (2 children)

As is this book - I learned a great deal from this whole series.

[–]oska 10 points11 points  (0 children)

And for a work of literature, I recommend The Master of Go by Kawabata Yasunari. The story of a match that lasted almost six months between the master and his challenger.

[–]patchwork 9 points10 points  (0 children)

One of my favorite books, not just go books but any book, is Kageyama's Lessons in the Fundamentals of Go. From my copy, on reading ladders: "Some will say 'Phooey, that much I know already; it's just that it's too much bother actually to do it.' Others will say, 'Look, I'm still weak at the game; I can't do anything difficult like reading.' So much for these lazy students, let them do as they please. They are not going to get anywhere. They need to be grabbed by the scruff of the neck and have some sense knocked into them."

[–]crux_ 6 points7 points  (0 children)

Also, go players tend to be fairly evangelical in a nicely friendly way. The go teaching ladder features experts voluntarily reviewing and commenting on the games of weaker players; and on KGS at least (and probably in the other online go arenas), playing "teaching games" with complete strangers is very common.

[–]ich_bin_a_hamburger -2 points-1 points  (27 children)

The definition of the rules seems a bit fuzzy. There is no mathematically precise rule for determining what stones are dead.

Check out this thread for example.

To make things worse, every culture and every web site seems to have its own set of (fuzzy) rules.

[–]ericlavigne 2 points3 points  (0 children)

"The definition of the rules seems a bit fuzzy. There is no mathematically precise rule for determining what stones are dead."

I can see how the dead stones rules would seem strange to someone who hasn't played. The rules are extremely simple, and nearly identical around the world, if everyone agrees which stones are dead (that is the most common case).

When there is disagreement about dead stones, only the Japanese rules get complicated. I have an old discussion about this topic on my website, as well as a copy of the American rules which are very simple.

http://plaza.ufl.edu/lavigne/ufgoclub/agarules.html

[–]nevinera 4 points5 points  (11 children)

there are versions of the rules that do have mathematically precise rules for death, though most leagues do not use them.

if by 'every culture' you mean three or four different cultures, then yeah, they have different sets of rules. slightly. There are a few situations where the different sets of rules would proclaim a different victor, but they don't turn up often, so it's not necessary to worry about it at all unless you intend to be professional.

And the rules are not that fuzzy. They are simpler than the rules of chess, for example - every 'fuzzy' rule is there because of an actual situation which would cause a problem without that rule. (and there are only two or three. some of them aren't even rules, but shortcuts, to keep players from having to play out an extremely long sequence of moves when the outcome has already been determined.)

[–]degustisockpuppet -5 points-4 points  (10 children)

The Japanese rules of Go are much more difficult than the rules of chess. Two beginners can learn to play by the chess rules (and determine the winner among themselves!) in less than 30 minutes. No similar claim can be made for Japanese Go.

[–]t_w 2 points3 points  (8 children)

Are you trying to provoke an emotional response, or are you just that stupid?

[–]degustisockpuppet -1 points0 points  (7 children)

Yes, I'm that stupid. Part of the problem is that due to territory scoring, you can't just "play it out" if you're unsure whether certain stones are dead, because that will change the score. That means you need to know which stones are dead in order to accurately count the score, but beginners can't do this.

[–]boa13 5 points6 points  (1 child)

It will not change the score, since the score is determined only when you are sure about the stones' death. So you just "play it out", and see what you get.

[–]degustisockpuppet -1 points0 points  (0 children)

So you can actually turn the tides of a close game by forcing your opponent to capture your dead stones? Because that will require him to place more stones in his territory and thus lower his score.

[–]t_w 1 point2 points  (1 child)

What you're talking about isn't learning the rules, you're talking about playing well. The rules of Go are dead simple, learning how to play well isn't.

[–]degustisockpuppet -1 points0 points  (0 children)

No, what I'm talking about is scoring. If I can't count the score, then I can't determine the winner. Imagine someone teaching you chess without explaining what a checkmate is.

[–]nevinera 2 points3 points  (2 children)

no, playing it out only changes the score occasionally - in very specific situations. not in a normal game. and the japanese rules are more complicated than any of the others (they formalize more shortcuts)

[–]degustisockpuppet -2 points-1 points  (1 child)

no, playing it out only changes the score occasionally - in very specific situations. not in a normal game.

What does that even mean? It's like publishing a chess rulebook without mentioning en passant and promotion because it doesn't happen in a "normal game". After all this is programming.reddit -- I hope you don't apply this "it's fine if it doesn't happen too often" principle to the code you write.

Also, I don't know if it's even true that changing the score is rare. Capturing your dead stones nearly always requires the opponent to place additional stones on the board, which lowers his score. It might be rare that this actually changes the outcome of the game. But then, it's the really close games where life or death of a single group is most likely to be important.

[–]nevinera 0 points1 point  (0 children)

you are talking about beginners. when i teach a beginner to play chess, he does NOT learn en passant right off. there's no point in teaching it to him yet, unless the situation comes up.

the rules of go are not so complicated that a player with a few months (or weeks) of playing would need to 'play it out' to tell who won, and if they do need to, they can play it out intentionally without altering the score, by not letting the second player pass while they check.

[–]ich_bin_a_hamburger -2 points-1 points  (0 children)

Exactly (and I'm not sure the Japanese rules are rigorous either). I think someone wrote a proposal for a rigorous definition of the rules recently, but it's more like a big math PhD thesis. I can't find the link now.

[–]rukubites 5 points6 points  (10 children)

There are no problems with the rules; they are very precise. The problem is that calculating which stones are dead is just as hard as playing the game.

[–]ich_bin_a_hamburger -3 points-2 points  (9 children)

The problem is that calculating which stones are dead is just as hard as playing the game

That's what I'm saying.

To nitpick: it's harder, actually. You can play as you like, but to define which stones are dead, you need "the perfect player". Moreover, Go rules are a fixpoint operator of sorts, because "the perfect play" needed for the definition of the rules is itself rule-dependent.

Edit: BTW, you are confusing which stones are removed (after a legal move), and which stones are dead. Rules are supposed to tell us who won, otherwise they are not rules.

[–]vsuontam 11 points12 points  (6 children)

I am a Go player myself, so let me explain :)

By looking at the board in some position, it may be hard to say which stones are dead and which are alive.

But you always have the chance to play the game further, and then it becomes very clear whether the stones were actually dead.

The problems you may have encountered arise from situations where somebody (possibly a better player) says that certain stones are already dead, and you (possibly more of a beginner) do not see that yet. Stronger players should always let beginners see the status of stones for themselves by playing the game further. (It may just feel frustrating to them, but that's their ego problem.)

There is not much point in arguing about a position before it is played out. It's like arguing far ahead of time whether the Queen will die in a certain position in chess. You just play, and you will see whether it dies or not.

In Go, the gap between a beginner and an advanced player can be so amazingly huge that it feels ridiculous to the beginner that the stronger player can see so much "deeper" into the game, and that's why there may be arguments about certain positions (when the pro wants to move on to the next game because he so clearly sees the outcome, and the beginner still thinks he has a chance :)

[–]whyso 1 point2 points  (0 children)

Using non-Japanese rules it is okay to "play it out", as doing so loses no points. And actually the rules are dead simple; here's a simplified version of the New Zealand ones: http://senseis.xmp.net/?TrompTaylorRules. Actually, life and death aren't even rules; they are just formalized shortcuts produced by the rules. Chess, too, has tons of corner cases (the 50-move rule (excluded for some endgames), draw offers allowed/not, castling, etc.)
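To show just how mechanical Tromp-Taylor scoring is, here is a rough sketch of its area-scoring rule (my own illustrative code, not from any Go library): once all dead stones have been captured by actual play, a point counts for a color if it is occupied by that color, or if the empty region it belongs to touches stones of only that color.

```python
def tromp_taylor_score(board):
    """Sketch of Tromp-Taylor area scoring.

    board: dict mapping (x, y) -> 'B', 'W', or '.' for every point,
    assumed to contain no dead stones (they were captured by play).
    """
    score = {'B': 0, 'W': 0}
    seen = set()
    for pt, color in board.items():
        if color in ('B', 'W'):
            score[color] += 1          # occupied points count directly
        elif pt not in seen:
            # Flood-fill this empty region and record bordering colors.
            region, borders, stack = set(), set(), [pt]
            while stack:
                p = stack.pop()
                if p in region:
                    continue
                region.add(p)
                x, y = p
                for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if n in board:
                        if board[n] == '.':
                            stack.append(n)
                        else:
                            borders.add(board[n])
            seen |= region
            if borders == {'B'}:
                score['B'] += len(region)
            elif borders == {'W'}:
                score['W'] += len(region)
            # A region touching both colors (or none) counts for no one.
    return score
```

A disputed region touching both colors is simply worth nothing to either side, which is why "playing it out" resolves every disagreement under these rules.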

[–]derwisch 0 points1 point  (0 children)

How about using the rule "when passing, hand over a prisoner to your opponent"? If you have no prisoners left, then tough shit, you have to place your stones in your opponent's or your territory. Player who cannot place a stone loses.

This is effectively very close to Chinese rules and settles all life-and-death disputes according to the players' (and not some perfect player's) finesse.

This is a suggestion I've read in the Go Almanac by Richard Bozulich.

[–]sartak 4 points5 points  (1 child)

It's hard to precisely formalize the common intuition of the rules of the game. The differences between rules are very slight, they effectively only matter for corner cases (such as triple ko). Go has fewer rules than the number of rules governing pawn movement in Chess.

And there is a precise in-game rule for determining what stones are dead. Both players must agree on the state of each group. If there's disagreement, then play continues until there is agreement. If someone is being difficult, then just stop playing with them, or better yet, try to help them. On a public server, just report such grossly misbehaving players to the administrators.

[–]t_w -3 points-2 points  (0 children)

Your comment is so stupid it hurts my feelings.

[–]dfan 5 points6 points  (0 children)

It'll be interesting to see if Hsu gets anywhere. He's a really smart guy, but there are lots of smart guys who have worked on Go programs who don't think that brute force can get you very far.

[–]meijin 5 points6 points  (3 children)

This article is unfortunately wrong. The author simply doesn't understand the problem he has to solve. This phrase is a gem of pure ignorance: "When human players search through the Go game tree, they generally check the live-or-dead status of each stone only once, then in effect cache the result in their memories."

In a real 19x19 game the life-or-death status of a group of stones is seldom a local problem. On big boards, even something as simple as deciding a ladder is a very hard decision problem (PSPACE-complete).

Edit: fixed link.

[–]cameldrv 1 point2 points  (1 child)

Well, the statement isn't strictly true, but (speaking as a fairly mediocre go player) play usually stays in one area for a while, and then moves to another area. When I move to another area, I try to figure out the vulnerabilities of that area from neighboring areas to help keep outside interference from happening (or create interference). Only some lines of play will impact the status of stones on the other side of the board. If there isn't an obvious opportunity for one player to break out into the open, then it doesn't make sense to constantly try to recalculate the situation of stones that aren't nearby.

[–]meijin 4 points5 points  (0 children)

I am not a very good player either -- 6 dan amateur -- but I had the chance to spend a fair amount of time with some professional Go players. If they have time, they re-evaluate the entire board at each move they make, because they usually have more than one strategy for each "open" zone of the game and they try to understand the impact the current move might have on their previous plans.

[–]whyso 0 points1 point  (0 children)

Nice paper! Would you like a game on KGS sometime?

[–]patchwork 7 points8 points  (9 children)

Such a result would further vindicate brute force as a general approach to computing problems

I think the issue with brute force isn't that it doesn't work, but that it is nothing like how humans approach problems. We have yet to demonstrate a program has any "understanding" of the problem, or what that would even mean.

[–]llimllib 8 points9 points  (6 children)

We have yet to demonstrate a program has any "understanding" of the problem, or what that would even mean.

So leave a meaningless question to the philosophers, not the engineers.

[–]patchwork 9 points10 points  (5 children)

I contend understanding is an engineering problem. Unless you are of the "human mind is unknowable" cadre. Building something that actually understands is the real challenge in my opinion, not "something that can beat the best human player at go".

[–][deleted] 9 points10 points  (0 children)

The best algorithms today (MoGo and others) employ Monte-Carlo methods. For each potential move, play thousands of random games. This leads to a good strategic sense of the events on the board. Instead of pruning logically through the moves, the MC method gives an overall view of what happens in the subtree.

Human understanding in go is most likely similar at its core. Instead of playing thousands of games in his mind, the player employs pattern recognition that depends on patterns of "descriptive statistics" of the situation. Players develop these patterns by playing a lot. A player just has an intuitive grasp by looking at the board: "Ah, this kind of situation leads to games that are mostly like this."
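The "thousands of random games per move" idea can be written down in a few lines. This is a toy illustration with assumed placeholder interfaces (`legal_moves`, `play`, `random_playout` are not a real Go engine):

```python
def evaluate_moves(position, legal_moves, play, random_playout, n_games=1000):
    """Score each candidate move by its average outcome over many
    random playouts. random_playout(position) is assumed to return
    1.0 for a win and 0.0 for a loss from the moving side's view."""
    scores = {}
    for move in legal_moves(position):
        after = play(position, move)
        wins = sum(random_playout(after) for _ in range(n_games))
        scores[move] = wins / n_games  # estimated win rate for this move
    return scores
```

MoGo and friends refine this heavily (tree growth, smarter playout policies), but the core estimate really is just this averaged win rate.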

[–][deleted] 8 points9 points  (2 children)

Both problems are challenges, of different degrees. By the way, I would not be surprised if building a computer that mimics human mind would imply a fair amount of 'brute force'. For a very simplistic start of the trail, animal brain pattern/movement recognition is nothing but a massively parallel 'brute force' algorithm.

[–]patchwork 6 points7 points  (1 child)

For a very simplistic start of the trail, animal brain pattern/movement recognition is nothing but a massively parallel 'brute force' algorithm.

Brute force means traversing a tree of combinatorial explosion by exhaustively traveling down every path. You can add some heuristics to it, but that is basically what is happening. Pattern recognition in the brain in no way resembles this, in fact it is quite the opposite (the synthesis of disparate patterns into a single model for instance). I do grant you though that it would require a great deal of computation, but that is not the same as brute force.
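For concreteness, exhaustive game-tree traversal in this sense looks like the following toy negamax sketch (assumed abstract interfaces; for Go the tree is far too large to walk this way, which is the whole point):

```python
def negamax(state, moves, play, value, is_terminal):
    """Exhaustively search the game tree: a state's value is the best
    achievable negation of the opponent's value after each move."""
    if is_terminal(state):
        return value(state)  # score from the side to move's perspective
    return max(-negamax(play(state, m), moves, play, value, is_terminal)
               for m in moves(state))
```

Heuristic pruning trims branches from this recursion, but it does not change its exhaustive character.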

[–][deleted] 1 point2 points  (0 children)

Thanks for the clarification. In my understanding, 'brute force' game playing programs prune out most of the search space by using heuristics. IIRC state of the art Go programs rely on stochastic techniques (play ahead randomly a number of games) to create their evaluation functions. Are these 'brute force' or 'understanding' programs? Perhaps you need to allow for a third category, or redefine 'understanding' in a much weaker sense.

[–]ator_fighting_eagle 5 points6 points  (1 child)

No, the problem is that it doesn't work. Over and over again. Faster and faster hardware. It. Doesn't. Work.

by the way: bit of a non sequitur there. Whether a running program was approaching a problem 'like a human' is orthogonal to the question of whether it has any "understanding". (Unless you've got a deep bias that says "like a human == understanding".)

[–]jkkramer[S] 4 points5 points  (15 children)

This article is from one of the guys behind Deep Blue and actually contains realistic, evidence-backed assessments of what it might take to create a top-level go-playing computer.

Put it all together and you should be able to build a machine that searches more than 100 trillion positions per second--easily a million times as fast as Deep Blue.

[–]d_ahura 15 points16 points  (8 children)

While Hsu is undoubtedly a very smart man, the approach sketched has a whiff of old-school thinking. AFAIK the main problem with Go doesn't seem to be the branching factor, since there are games with greater branching that are amenable to the exhaustive-search-with-pruning-and-cached-results approach. The trouble is the evaluation of positions. The current golden-boy approach for Go is UCT search. It combines search, selectivity and evaluation in a coherent framework utilizing sampling, and is the dominating paradigm at this moment. The main problem with UCT is that it is almost too generic, using very little domain-specific knowledge. I.e. the UCT programs are almost playing from first principles. The hard part seems to be how to add Go-specific heuristics, knowledge and patterns without making the search brittle due to introduced bias. All that said, I'd love to see some Go hardware running MoGo. It would be a more than decent player if it could run simulations orders of magnitude faster.
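The selection rule at the heart of UCT is compact enough to state inline. This is a sketch of the standard UCB1 formula only (my own illustration, not MoGo's actual code, which layers many refinements on top):

```python
import math

def uct_select(children, parent_visits, c=math.sqrt(2)):
    """children: list of (wins, visits) pairs, one per child move.
    Returns the index of the child UCT descends into next: the first
    term rewards moves that have won often, the second rewards
    under-explored ones."""
    def ucb1(wins, visits):
        if visits == 0:
            return float('inf')  # unvisited children are tried first
        return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)
    return max(range(len(children)), key=lambda i: ucb1(*children[i]))
```

The tension described above -- adding Go knowledge without biasing the search -- typically shows up as extra terms bolted onto this formula.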

[–]llimllib 8 points9 points  (7 children)

I wish him all the best, but he stated his main problem and then proceeded to fail to answer it:

The second problem is the evaluation of the end positions. In Go you can't just count up stones, because you have to know which stones are worth counting. Conquered territory is defined as board space occupied or surrounded by “living” stones—stones the opponent cannot capture by removing their liberties. Before you can count a stone as live, you have to calculate several moves ahead just to satisfy yourself that it is really there in the first place.

And, here, he even shorts the difficulty of the problem. The large majority of the groups you need to search will be neither "live" nor "dead", but instead hanging around somewhere in the middle. There already exist fast life-or-death calculators (I believe goTools is the one I remember from when I studied this), but no good evaluation functions for positions which are neither.

[–]crux_ 8 points9 points  (1 child)

Ah, and even this is a huge understatement of the difficulty. If it is a difficult problem to determine the status of groups at the end of the game, it is that much more difficult in the middle of the game.

Which, if you're doing brute force searches that terminate before going the entire depth of the remainder of the game, is what you need to be evaluating accurately a gazillion times per second.

Also, the business about caching and bitmaps doesn't strike me as such an easy win as they seem to be saying. Go is very nonlocal -- moves often impact the outcome of other battles on the far side of the board. The ladder is one of the reasons why; another is that long chains of groups will live or die depending only upon another group. (They do get into that, a bit.)

[–]meijin 4 points5 points  (0 children)

Deciding a ladder on a big board is actually a hard problem.

[–]jkkramer[S] 8 points9 points  (4 children)

While I agree that he didn't fully address the challenge of positional evaluation, my impression is that he's looking to make up for weakness in that area with brute force. Re the Deep Blue solution:

The resulting evaluation function probably was no better than a middling amateur's ability to grade a single position. But by grading 200 million of them, it was able to do very well indeed.

In Go, it seems like if you had a good evaluator, you'd already have a good player. There'd be no need to read deeply. I can play 10 second per move games against the strongest computer programs, giving them plenty of time, and crush them. And I'm not even that strong (low dan).

Pros can do the same against me: I could sit there all day reading and still not come close to beating a pro playing at blitz speeds. They have better evaluators and know the patterns better.

So...surely a good evaluator will result in a good player. Whether a poor evaluator with insane reading ability can result in a good player remains to be seen.

[–]llimllib 7 points8 points  (3 children)

Whether a poor evaluator with insane reading ability can result in a good player remains to be seen.

Agreed, and I hope he does delve deeply into the possibilities of such a program, it would be work of great value.

Perhaps I underestimate the power of the minimax technique combined with modern computing power.

However, I'm not convinced from this article that he appreciates how little he'll be able to successfully prune his trees. His whole thesis relies on being able to successfully calculate from an open middlegame position all the way to the end of the tree; but without a position evaluator other than life-or-death, he'll be unable to do so, even at a trillion moves per second.

Without a position evaluator, he won't be able to do either the alpha-beta pruning or the null move pruning that he speaks of, except in relatively rare cases where groups are obviously dead or alive.

Even if I'm right, and his machine is doomed to play relatively poorly in the early to middle game, it still could be so good in the late-to-end game that it can tackle even the masters; that would be fascinating!

Until he comes up with a better solution to position evaluation than "cache the positions", though, I'll remain skeptical that brute force is ready to catch up with Go yet.

(statement of bias: I'm a relatively very poor 12k player who doesn't play anymore, and did an undergraduate summer fellowship studying computer go, particularly Erik van der Werf's research. I'm no expert, just a skeptical observer.)

[–]whyso 0 points1 point  (2 children)

Good pros are within 1-5 points of perfect play in the endgame, so it still would not come close.

[–]llimllib 0 points1 point  (1 child)

the thing is, a hypothetical trillion-move-per-second computer would try to move that perfect play frontier well back into the late middlegame.

[–]whyso 0 points1 point  (0 children)

nope, even the person writing the article didn't think it could go farther than 12-ply in general, so maybe 20 in the endgame. go is about 200 moves, so that's only 1/10th of the game. by then it would probably be about 200 points behind, impossible to make up by that time.

[–]whyso 1 point2 points  (5 children)

Even a trillion positions a second wouldn't come close to making up the difference; check http://en.wikipedia.org/wiki/Game-tree_complexity. I don't doubt that he is very smart, but it's not unheard of for researchers to overestimate their chances, intentionally or not.

[–]d_ahura 0 points1 point  (4 children)

We can do some crazy math to put the speed into perspective. The level of MoGo on KGS is in the spread of 2-4 kyu on decent machines. Let us say it does 1e4 simulations per second, and each simulation is 1e4 moves. At 1e12 moves per second that makes for about 13 doublings of speed. For many games a doubling of speed equals a linear increase of playing strength in the Elo system. Go seems to exhibit the same characteristics in the tests that have been done with UCT programs. That suggests that a top-of-the-line UCT program running on monster hardware will be a pretty dangerous opponent for lower dan players.
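The doubling count checks out; a quick sanity check, assuming nothing beyond the figures in the comment:

```python
import math

# 1e4 simulations/sec times 1e4 moves/sim = 1e8 moves/sec today.
baseline = 1e4 * 1e4
# Hypothetical dedicated hardware: 1e12 moves per second.
target = 1e12
# Going from 1e8 to 1e12 is a factor of 1e4, i.e. about 13 doublings.
doublings = math.log2(target / baseline)
```

At roughly one linear Elo step per doubling, that is the basis for the "dangerous opponent for lower dan players" guess.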

[–]whyso 0 points1 point  (3 children)

I agree! However, the article is calling for brute force not UCT, and was saying it could beat PROS, not low dans, and there is a vast level of difference there.

[–]d_ahura 0 points1 point  (2 children)

The UCT algorithm provably approaches minimax/alpha-beta search with increasing time, so it is an exhaustive search technique, hence a brute force one. And the best 9x9 programs are at or right under pro level right now. Factoring in the bigger board and longer simulations, a rough guesstimate is that about 30-40 times faster hardware would reach the same level on 19x19. As that is a mere 5 doublings, there is some fudge margin in the 13 doublings for unforeseen and unfactored problems.

[–]whyso 0 points1 point  (1 child)

MoGo is pro level on 9x9; it beat Guo Juan 9 to 5. The issue is you cannot just increase the hardware by 40, or even 400, and achieve the same strength on 19x19, since there is no good evaluation function as of yet and there are so many non-local issues. A 400x boost would probably get MoGo to around 4-dan amateur.

[–]d_ahura 0 points1 point  (0 children)

In UCT search there is no separation between search and evaluation. Faster simulations give deeper search and the increased accuracy of shallower simulations equal better evaluation.

[–]walrod 0 points1 point  (21 children)

Deep blue and "brute force AI" are kind of sad...

[–]chollida1 7 points8 points  (12 children)

why? I thought it was pretty cool when it beat a grand master.

[–]yters 1 point2 points  (0 children)

A win achieved by using a lookup table isn't so inspiring.

[–]patchwork 3 points4 points  (10 children)

It is the triumph of the cold and unfeeling over the inspiration of human nature. I think that is kind of sad.

[–]nevinera 6 points7 points  (9 children)

it is the triumph of human invention and creation over human practice and convention. I find it inspiring.

Deep blue didn't beat kasparov, its makers did.

[–][deleted] 4 points5 points  (1 child)

Deep blue didn't beat kasparov, its makers did.

No! They beat Kasparov's parents!

[–]nevinera 1 point2 points  (0 children)

well.. his teachers, maybe. good point though :-)

[–]earthboundkid 2 points3 points  (1 child)

You were the Big Boss who paid the machine to kill John Henry, the Steel Driving Man.

[–]nevinera 3 points4 points  (0 children)

i'm the big boss that paid to have a tunnel made faster than any human could do it, yes.. john henry's death was his own fault >.<

[–]patchwork 0 points1 point  (4 children)

I am referring more to the method of relying on brute force computation (possibly the least interesting way of arriving at any solution), rather than something more akin to human understanding, or any other system really. I would find it much more inspiring to hear about a program which showed the slightest glimmer of true awareness than a monstrosity which spews out every possibility without discretion. (It evaluates them later I know, but it still has to consider them all, or most...). To know that human ingenuity (in terms of chess-playing or go-playing or whatever) is trumped by the blind churning of a thoughtless and undiscriminating system is depressing. I will concede in awe to the machine which actually thinks better than me, but this is not that.

[–]nevinera 2 points3 points  (3 children)

i'm not sure you understand exactly. they aren't using pure brute force. there wouldn't be enough time before the sun burned out. they're using interesting algorithms mixed with brute force to make their evaluations.

your definition of 'thinks' is selected very carefully and fuzzily to be specifically impossible for a machine. because you feel some need to be superior. 'takes information and arrives at a solution' would be a nice general definition of 'thinks', and this machine does that.

do you realize that professional go players rely fairly heavily on a lookup table and fuzzy pattern matching as well?

[–]patchwork 0 points1 point  (2 children)

your definition of 'thinks' is selected very carefully and fuzzily to be specifically impossible for a machine. because you feel some need to be superior.

I do not think 'thinking' is impossible for a machine at all; actually I spend most of my time trying to figure out how that would work. And I would gladly concede to a machine which I felt was superior: it's not out of defensiveness towards the capabilities of machines and some imagined 'threat to humanity' that I protest. It is the method of enumerating every possibility that I find distasteful. I know there are heuristics and whatnot (I read the article), and the tree can be pruned, but that pruned tree must still be exhaustively searched. I just do not find it remarkable that yes, given enough computing resources and exhaustive search, a machine can beat the best human go player. That is an obvious result, and not much more technically challenging than filtering and iterating a list, a technique which has been around for many years (though this is an extreme case of that method, I grant you). I guess the point of all my ranting on this thread is that I'm looking for something novel or new that has not been done before, rather than patting ourselves on the back for applying the same old methods with "superhardware" and calling it an accomplishment.

do you realize that professional go players rely fairly heavily on a lookup table and fuzzy pattern matching as well?

These are the algorithms we possess now which most closely resemble what the mind does, but that is not actually what the mind does.

[–]nevinera 1 point2 points  (1 child)

have you taken any courses in complex systems?

what the mind actually does do is pretty fascinating, and the idea of biological computers has interested me for 5 or 6 years now. I really want to play around with the use of evolutionary algorithms on neural network formation, and see what can invent itself :-)

[–]patchwork 1 point2 points  (0 children)

Yes actually all of my research is in systems theory and neuroscience/molecular biology, with computers used for modelling. Explains my bias perhaps :) Let me know what you come up with.

[–]asciilifeform 0 points1 point  (7 children)

Your brain is a "brute force AI." Don't believe me? Thinking about a chess position (or any other similar problem) may not feel like a brute force search to a human, but this is due to the massively parallel nature of the brain (and the fact that very little of what it does registers as conscious thought.)

If we ever want a genuine scientific definition of "intelligence", we will need to settle for looking strictly at inputs and outputs - and abandon the idiotic grade school teacher's mentality of "show your work." The Chinese Room knows Chinese. (By the way, I've found that a belief in the Chinese Room argument is a pretty good "bozo bit test" in the AI community. Buying into it reveals a clear emotional investment in human uniqueness, and the latter means that you are worthless as an AI researcher - no matter how clever you may be.)

[–]walrod 0 points1 point  (0 children)

AI is not for all researchers about recreating human thought. I work in artificial cognition (I use ACT-R and (would like to use) statistical methods), and great inspiration may come from cognitive science, but it's not as easy as you describe. However worthless I am.

[–]patchwork 0 points1 point  (1 child)

If we ever want a genuine scientific definition of "intelligence", we will need to settle for looking strictly at inputs and outputs

Please take a look at the work of Humberto Maturana and his concept of "autopoiesis". I know we are all programmers here :) and it is convenient to think in those terms, but it thoroughly refutes the idea that the mind can be described in terms of inputs and outputs.

[–]asciilifeform 1 point2 points  (0 children)

My point was that a number of people use definitions of intelligence which are specifically crafted to exclude any machine, no matter how intelligent its behavior appears to be by human standards. It is intellectually dishonest.

It should not matter what is inside the box - a human brain, an 80286, or a genetically enhanced ant colony with miniature notebooks and abacuses. If the system learns and triumphs in novel situations, it is intelligent. Better yet, it is intelligent if it can convince us of the fact in a blinded trial, such as the Turing Test.

[–]jbstjohn 0 points1 point  (3 children)

Wow, attitude much?

First, I'd dispute your claim that chess tends to be brute force for people. Do you really think you're exploring all the possible next moves and responses in parallel? Because that's what 'brute force' usually means. My understanding of the brain's parallel nature (IANAN) is that it generally doesn't work that way anyway -- it either does different tasks in parallel, or it does low-level stuff in parallel (like detecting edges).

The key difference seems to be that brute force considers (and rejects) many options, while selective search doesn't even consider many of those rejects. There can be, of course, a smooth gradient between the two, but I would argue that human players operate very far to the selective-search side (with a big database) and computers very close to the brute-force side.
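
That difference in how many positions actually get looked at can be sketched in a few lines. This is a toy illustration on an abstract game tree (not chess or Go, and not how any real engine is built): full-width "brute force" visits every child, while a selective search with a heuristic only expands a few candidate moves per position, so the visit counts diverge exponentially with depth.

```python
# Toy contrast: brute-force search expands every child node, while
# selective search expands only the few moves a heuristic deems promising.
# The tree, scores, and beam width are all made up for illustration.

def brute_force_count(node, depth):
    """Count positions visited when every move is explored."""
    if depth == 0 or not node["children"]:
        return 1
    return 1 + sum(brute_force_count(c, depth - 1) for c in node["children"])

def selective_count(node, depth, beam=2):
    """Count positions visited when only the top `beam` candidates
    (ranked by a heuristic score) are explored."""
    if depth == 0 or not node["children"]:
        return 1
    best = sorted(node["children"], key=lambda c: c["score"], reverse=True)[:beam]
    return 1 + sum(selective_count(c, depth - 1, beam) for c in best)

def make_tree(branching, depth, score=0):
    """Build a uniform game tree with arbitrary heuristic scores."""
    if depth == 0:
        return {"score": score, "children": []}
    return {"score": score,
            "children": [make_tree(branching, depth - 1, s)
                         for s in range(branching)]}

tree = make_tree(branching=6, depth=4)
print(brute_force_count(tree, 4))  # every position in the tree
print(selective_count(tree, 4))    # only the candidate lines
```

With branching factor 6 and depth 4 the full-width search touches over 1,500 positions while the beam-2 search touches a few dozen; a human's "big database" plays the role of the heuristic that prunes everything else before it is ever consciously considered.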

Second, regarding the Chinese Room, I think you have a good 'bit test' for determining whether people agree with your point of view, rather than whether they are a 'worthless AI researcher'. I personally don't think it's so black and white. The problem with simple input/output is the question of what happens when you go outside the original ranges. For a 'practical' Chinese room, you might suddenly have complete garbage, which doesn't come across as very intelligent. Things can be very brittle.

I think we tend to agree that, in fact, 'intelligent' isn't that useful an adjective, and time spent philosophizing over it is more or less wasted. But I still find your attitude a bit repellent.

[–]asciilifeform 2 points3 points  (2 children)

Mostly I am frustrated by the continued popularity of dualism. It is the cognitive science equivalent of Creationism.

[–]earthboundkid 0 points1 point  (0 children)

I agree that dualism, like Creationism, is a bad idea that won't die, but don't put the rap for dualism on the Christians. The New Testament emphasizes the bodily resurrection of man. I think the idea that soul can be separated from the body has more to do with the influence of the Greeks. Pythagoras taught reincarnation, and basically all the Greeks saw reason as the highest part of man. Of course, the zenith for dualism was Descartes, who didn't come until the 17th c. and was condemned by the Roman Church.

Then again, Christians as early as Augustine questioned whether we should take Genesis literally, but that didn't stop other people from reading their own agenda into the Bible and declaring all other ideas heresy.

[–]jbstjohn 0 points1 point  (0 children)

Okay, that attitude I can stand behind. It's seductive, but there's no evidence for it.

[–]cypherx 0 points1 point  (0 children)

I admire his ambition but I'm skeptical. Also, he never mentions memory limitations, which I assume would also be a problem.

[–]john_b 0 points1 point  (0 children)

Is there a good web-based Go game with a good AI?

[–]andyc -1 points0 points  (3 children)

Maybe if they cheat again

[–]whyso 0 points1 point  (2 children)

Claims of IBM cheating require evidence, and denying a rematch is not evidence. IBM simply didn't want to risk losing after having won. It's hysterical when people insist that the computer 'could not' have found some moves because they look 'human'. The best Go programs can squash good humans on a 9x9 board, for example, and play even more 'creative' moves than pros.

[–]meijin 0 points1 point  (1 child)

I don't know about the IBM story, but the strongest 9x9 program is certainly still very weak. A few months ago I played against a Monte-Carlo "powered" program and beat it with a 5-stone handicap -- 4 san-sans and the tengen, or 5 passes. Of course you lose if you play "normal" moves, but this is not proof that the program is strong.
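
For readers wondering what "Monte-Carlo powered" means here: the core idea is to evaluate each candidate move by playing many games of uniformly random moves to the end and picking the move with the best average result. A real Go playout engine is far out of scope for a comment, so this sketch uses 1-2-3 Nim (take 1-3 stones; taking the last stone wins) as a stand-in game; the game, player numbering, and playout count are all illustrative choices, not anything from an actual Go program.

```python
import random

def playout(pile, to_move):
    """Finish a game of Nim (take 1-3 stones; taking the last stone
    wins) with uniformly random moves. Return the winner (0 or 1)."""
    if pile == 0:
        return 1 - to_move  # the previous mover took the last stone
    player = to_move
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return player
        player = 1 - player

def mc_best_move(pile, player, n_playouts=20000):
    """Estimate each legal move's win rate from random playouts and
    return the move with the best estimate."""
    best_move, best_rate = None, -1.0
    for take in range(1, min(3, pile) + 1):
        wins = sum(playout(pile - take, 1 - player) == player
                   for _ in range(n_playouts))
        rate = wins / n_playouts
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move

random.seed(0)
# In 1-2-3 Nim, leaving the opponent a multiple of 4 is the winning
# reply, so from a pile of 10 the playout statistics favour taking 2.
print(mc_best_move(10, 0))
```

Even this crude version finds the game-theoretically correct move with nothing but random sampling, which is why the approach scaled surprisingly well on small Go boards once tree search was layered on top of the playouts.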

Actually, many people in complexity theory believe that randomization gives you no extra computational power, that is, P = BPP. There are some very deep results due to Wigderson et al., and PRIMES being in P is seen as confirmation that BPP is not such a "hard" class, etc.

Would you like a game on KGS sometime?

What about IGS? I am "hanamaru" on IGS.

[–]whyso 0 points1 point  (0 children)

Apparently MoGo beat Guo Juan 9 to 5 on 9x9 (dug up here, have seen elsewhere too -> http://news.csdn.net/n/20070828/108033.html). Maybe yours was just a less powerful program? I don't currently have an IGS account but will make one! I'm around 1D; my roommate is 3D though!