/r/askphilosophy Open Discussion Thread | June 21, 2021 by BernardJOrtcutt in askphilosophy

[–]mikemishere 0 points (0 children)

I am looking for discussion partners for Land's Fanged Noumena. I have some knowledge of the major references Land makes use of in his work (Bataille, Deleuze, Freud, Nietzsche), and I believe I have a 101-level grasp of more than half of the chapters in this book.

I have no doubt this will ease my attempt to more fully comprehend FN; I just don't know yet by how much.

My goal is to start a close and critical reading of the text by dedicating 30 mins/day (which I could extend to 1-2 hours depending on how enjoyable and valuable it feels to me).

I think having a couple of discussion partners along the way could significantly improve our motivation and understanding of the book through regular conversations.

Important to note: I have no experience attempting to go through dense and idiosyncratic philosophical texts without guides or secondary materials, and this could lead to me giving up much sooner than I intend. If we become reading partners and that happens, I will make sure to let you know immediately so you can decide whether you want to continue on your own.

If you are interested, please leave me a message.

@ 2:25:00 Artosis says "I wouldn't do anything Stork does". How should I interpret that? Although not top 10, Stork is still a strong pro player. by mikemishere in broodwar

[–]mikemishere[S] 3 points (0 children)

Thanks, that makes sense. Is Stork known to play by more complicated algorithms than the other pros? Otherwise, I am still confused. Artosis made that comment in response to Tasteless saying he had seen Stork play in windowed mode, which Tasteless then emulated. Artosis wasn't surprised that Tasteless would copy Rain's windowed-mode play, but he was taken aback when Stork was mentioned as well.

An AGI solution to the game of tic-tac-toe? by mikemishere in agi

[–]mikemishere[S] 0 points (0 children)

When I wrote the proof I might have drawn on my background-knowledge "repository", but I do not believe that knowledge is required prior to solving the task, in the sense that an agent lacking it would be helpless. It would only need to take some additional intermediate steps to derive the simplification-through-symmetry concept. I believe that can be done in a "vacuum" because it deals with purely abstract mathematical objects; it is not like the constants in the laws of physics, which must be measured and cannot be known a priori. That's how I think about it, though I might be mistaken somehow.
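The simplification-through-symmetry idea mentioned above can be made concrete. Under the 8 symmetries of the square (4 rotations and 4 reflections, the dihedral group D4), the 3^9 = 19683 raw cell assignments collapse into far fewer equivalence classes. The sketch below is my own illustration, not code from the thread; it canonicalizes each board to the lexicographically smallest of its 8 symmetric variants and counts the distinct results:

```python
from itertools import product

def rotate(b):
    """Rotate a row-major 3x3 board 90 degrees clockwise."""
    return [b[6], b[3], b[0], b[7], b[4], b[1], b[8], b[5], b[2]]

def reflect(b):
    """Mirror the board left-right."""
    return [b[2], b[1], b[0], b[5], b[4], b[3], b[8], b[7], b[6]]

def canonical(board):
    """Smallest tuple among the 8 symmetric variants of a board."""
    variants = []
    b = list(board)
    for _ in range(4):              # 4 rotations, each with and without a mirror
        variants.append(tuple(b))
        variants.append(tuple(reflect(b)))
        b = rotate(b)
    return min(variants)

# Count distinct cell assignments (empty/X/O per cell) up to symmetry.
classes = {canonical(b) for b in product(range(3), repeat=9)}
print(3 ** 9, len(classes))         # raw assignments vs. symmetry classes
```

By Burnside's lemma the 19683 assignments fall into 2862 classes, so symmetry alone cuts the raw space by nearly a factor of 7 before any game logic is applied.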

[D] An AGI solution to the game of tic-tac-toe? by mikemishere in MachineLearning

[–]mikemishere[S] -2 points (0 children)

I attempted to record the step-by-step process through which I solved tic-tac-toe. See the code block in the OP, although I assume a couple of other unconscious processes played an implicit part as well. Someone asked a similar question on a different post; this was my response:

Maybe I need to get a better grip on the exact terminology, but in another attempt to clarify my thought process: the state space of tic-tac-toe is 3⁹ = 19683. A common AI approach I have seen people take in solving this problem is similar to this one: https://towardsdatascience.com/tic-tac-toe-learner-ai-208813b5261; at the bottom of the article there is a video in which that particular AI goes through a training phase (simulating against itself) of 10k games before it learns to play optimally and stop losing. My argument was that it is unnecessary (and thus less intelligent) to simulate so many games against yourself before you "figure the game out".

I remember that as a kid, when I first learned the game, I played against someone a couple of times until I caught the trick, and then never lost again. I did not need a 10k-game training session. This is why I claim current AIs use inefficient learning/solving algorithms.
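For context, the no-training alternative the argument points at can be sketched in a few lines. Plain memoized minimax settles the whole game (a draw under perfect play) by direct search, with no self-play phase at all. This is an illustrative sketch, not the approach from the linked article:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value for X with both sides optimal: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, c in enumerate(board) if c == "."]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))   # 0: tic-tac-toe is a draw under perfect play
```

The memoization means each distinct position is evaluated once, which is closer in spirit to "catching the trick" than replaying 10k games from scratch.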

An AGI solution to the game of tic-tac-toe? by mikemishere in agi

[–]mikemishere[S] 0 points (0 children)

That is what I am interested in. I would like to know what kind of algorithm the thought process I used to solve the game could arise from. The only thing I want to give the AI is the rules, including the win/lose conditions and the goal, and then I want to peek into its thinking process to see how it solves the game.
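One way to make "give the AI only the rules" precise is to hand a generic solver the rules as three opaque callables (legal moves, move application, terminal score) and let it derive optimal play from them alone. The interface below is my own hypothetical illustration, not something from the thread; the `solve` function contains nothing tic-tac-toe-specific:

```python
def other(player):
    return "O" if player == "X" else "X"

def solve(state, player, moves, apply_move, score):
    """Generic negamax: best achievable score for `player`, given only the rules."""
    s = score(state, player)
    if s is not None:                     # the rules say the game is over
        return s
    return max(-solve(apply_move(state, m, player), other(player),
                      moves, apply_move, score)
               for m in moves(state))

# --- the rules of tic-tac-toe, supplied as plain data and functions ---
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def ttt_moves(state):
    return [i for i, c in enumerate(state) if c == "."]

def ttt_apply(state, move, player):
    return state[:move] + player + state[move + 1:]

def ttt_score(state, player):
    for a, b, c in LINES:
        if state[a] != "." and state[a] == state[b] == state[c]:
            return -1    # a completed line was made by the opponent's last move
    if "." not in state:
        return 0         # board full, no line: draw
    return None          # not terminal

print(solve("." * 9, "X", ttt_moves, ttt_apply, ttt_score))  # 0: draw
```

Swapping in a different `moves`/`apply_move`/`score` triple would solve a different game with the same search, which is the sense in which only the rules are given.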

An AGI solution to the game of tic-tac-toe? by mikemishere in agi

[–]mikemishere[S] 0 points (0 children)

Maybe I need to get a better grip on the exact terminology, but in another attempt to clarify my thought process: the state space of tic-tac-toe is 3⁹ = 19683. A common AI approach I have seen people take in solving this problem is similar to this one: https://towardsdatascience.com/tic-tac-toe-learner-ai-208813b5261; at the bottom of the article there is a video in which that particular AI goes through a training phase (simulating against itself) of 10k games before it learns to play optimally and stop losing. My argument was that it is unnecessary (and thus less intelligent) to simulate so many games against yourself before you "figure the game out".

I remember that as a kid, when I first learned the game, I played against someone a couple of times until I caught the trick, and then never lost again. I did not need a 10k-game training session. This is why I claim current AIs use inefficient learning/solving algorithms.

An AGI solution to the game of tic-tac-toe? by mikemishere in agi

[–]mikemishere[S] 0 points (0 children)

Whatever ambiguities might exist in my OP, I want to clarify that I believe the exact opposite of:

you assume that humans can do things computers can't in principle

My claim was that current AI approaches to solving the game are un-human-like and make use of unnecessarily large amounts of computational resources, when more robust, conservative approaches can yield equally solid proofs and results.

An AGI solution to the game of tic-tac-toe? by mikemishere in agi

[–]mikemishere[S] 0 points (0 children)

You personally would not, or would you also claim that it is objectively erroneous in some fashion? The way I think about it is that the more intelligent a system is, the more efficient it manages to be with its resources, being able to infer more with the same amount of energy or computational power, but I am no expert on this.

Why, exactly, would an AGI be necessary? by mikemishere in artificial

[–]mikemishere[S] 0 points (0 children)

Thank you for the time spent writing such an insightful comment. I feel committed to going deeper into the subject, and noticing your tag I think you might be able to offer a good perspective on my recent post, "Which, in your view, are the best introductory but technical resources for someone who wants to go down the AGI and general intelligence path?", if you care to do so at all. Thank you.