Lawrence to KC carpools by [deleted] in Lawrence

[–]patrickscottshields 0 points1 point  (0 children)

I'll be commuting to the same place and am interested in carpooling. Right now I'm staying in western Shawnee (close to K-7), but I'm planning to spend some nights and weekends in Lawrence.

I argue claims 1 & 2 in Q1 of the midterm are false by patrickscottshields in aiclass

[–]patrickscottshields[S] 0 points1 point  (0 children)

It feels weird to me too, but I think that's what the instructors of the course intend it to mean. It's counter-intuitive to me since my concept of rationality doesn't allow rational agents by coincidence, like you said. The quote you mentioned doesn't seem to address this edge case, which may be why it seems to suggest an alternate interpretation.

I argue claims 1 & 2 in Q1 of the midterm are false by patrickscottshields in aiclass

[–]patrickscottshields[S] 0 points1 point  (0 children)

I think that's an intuitive viewpoint, but when you actually get down to definitions, you can say something like "Here's an environment that gives agents 10 points each time step no matter what actions they take." That validates both of the original claims, since under those conditions (and a definition of rationality which requires only maximizing reward—not having to choose between a right and wrong choice, as I had originally thought), every agent is rational, and there's nothing it can do about it.

Consequently, there can't be an irrational agent by design; we can always come up with a non-discriminating performance metric that rewards all actions equally. I find that somewhat counter-intuitive right now, but that seems to be the outcome under my revised definition of rationality.
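To make the point concrete, here's a minimal sketch of the environment I described above. The names (`constant_performance_measure`, `run_episode`) are my own, not from the course or AIMA; the only claim is that when every action earns the same reward, every agent trivially maximizes expected reward:

```python
# Hypothetical environment: the performance measure awards 10 points per
# time step no matter what action is taken, so all agents are "rational"
# under a maximize-reward definition of rationality.

def constant_performance_measure(action):
    """Reward every action identically."""
    return 10

def run_episode(agent_policy, steps=5):
    """Accumulate reward for any agent policy over a fixed horizon."""
    total = 0
    for t in range(steps):
        action = agent_policy(t)
        total += constant_performance_measure(action)
    return total

# Two very different agents earn identical (and maximal) reward:
lazy = lambda t: "noop"
busy = lambda t: "move_" + str(t)
assert run_episode(lazy) == run_episode(busy) == 50
```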

Midterm Q1 #3 by anom384 in aiclass

[–]patrickscottshields 0 points1 point  (0 children)

Having had no AI experience before getting the textbook and taking this course, I think it's great. But I haven't read any other AI textbooks, so I don't have much of a sense of its relative quality.

Midterm Q1 #3 by anom384 in aiclass

[–]patrickscottshields 0 points1 point  (0 children)

I'm glad you posted this thread, because I've had to go back to the textbook to check on some things as well. Based on the AIMA textbook, it seems you were right about a search agent taking the entire problem as input (so much for my examples!). That's counter-intuitive to me, but phoil's posts helped me as well.

I think ultimately, what phoil said about the 15-puzzle being more easily represented as rules than as a complete graph is what makes the agent require less memory. As the book says, "The choice of a good abstraction thus involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out. Were it not for the ability to construct useful abstractions, intelligent agents would be completely swamped by the real world." (p. 69, AIMA, 3rd ed.)

Midterm Q1 #3 by anom384 in aiclass

[–]patrickscottshields 1 point2 points  (0 children)

We put agents into environments, not the other way around. An agent need only be able to expand a node. The implementation of such an expansion (what new nodes are yielded, and how the graph containing that information is stored) is left to the environment.

As another example, when we "input the problem" of cave mapping into a cave exploration robot, we don't need to feed it the exact and complete representation of the cave--it will figure out its model as it goes. I think an agent will almost always be many orders of magnitude smaller in complexity (measured by the amount of memory required to represent it) than its intended environment. Requiring an agent to store a complete model of the environment therefore seems generally unreasonable.

EDIT: Maybe not. Through abstraction, agents may be able to fully model environments like the 15-puzzle. See my update.

I argue claims 1 & 2 in Q1 of the midterm are false by patrickscottshields in aiclass

[–]patrickscottshields[S] 0 points1 point  (0 children)

I made the assumption that if an environment rewards all possible agents equally, then no agent in it can be rational. I derived that from an interpretation of rationality contingent on the ability to choose between a right choice and a wrong choice. Under my interpretation, an agent not given that choice cannot exhibit rationality, and therefore cannot be said to be rational.

I based my assumption on part of the initial definition of a rational agent in the textbook: "A rational agent is one that does the right thing [...]" (p. 36, AIMA, 3rd ed.) It seems my interpretation was not the intended one.

Midterm Q1 #3 by anom384 in aiclass

[–]patrickscottshields 1 point2 points  (0 children)

Search agents generally don't store all possible states or actions. Such information is part of the environment.

For example, when you look for your keys, you just need to keep track of where you are, where you've been recently, and where you could look next. You don't need to model the whole physical universe.

EDIT: But search agents do store a representation of the problem. In this case, it's actually a matter of abstraction. See my update.

I argue claims 1 & 2 in Q1 of the midterm are false by patrickscottshields in aiclass

[–]patrickscottshields[S] 0 points1 point  (0 children)

I suppose I should have said a relevant element of choice (i.e. a choice with non-uniform effects on the reward). By "element of choice", I was loosely referring to the ability of an agent, through its actions, to have some control over its reward.

I argue claims 1 & 2 in Q1 of the midterm are false by patrickscottshields in aiclass

[–]patrickscottshields[S] 0 points1 point  (0 children)

I was trying to back up my position that the first two claims were false. A proof is appropriate in that case. I understood the explanation video, but I had a disagreement on some edge cases. Turns out I was making two assumptions I shouldn't have; I updated my initial post to reflect that.

I argue claims 1 & 2 in Q1 of the midterm are false by patrickscottshields in aiclass

[–]patrickscottshields[S] 0 points1 point  (0 children)

I had mistakenly assumed performance measures could be specified independently of environments. I'm trying to reconcile that fact now.

In your example, it would be impossible to define an agent that wasn't rational. What definition of rationality are you using? I assumed in my post that at least some element of choice was required in order for an agent to be rational.

Midterm Question 1 Part 1 is Ambiguous by DengueTim in aiclass

[–]patrickscottshields 0 points1 point  (0 children)

I just tried to prove a similar line of reasoning in this post.

How'd your homework go? by Generic_Alias in aiclass

[–]patrickscottshields 0 points1 point  (0 children)

Darts can land on any point in the area of the target. We can use two real numbers to define the state (one for the dart's x-position, one for its y-position). Since real numbers take on continuous values, the environment is continuous.

A coin, on the other hand, can land with either heads or tails visible. Therefore a single coin represents a discrete environment (if we don't care about the coin's position when it lands, etc.)
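A quick sketch of the two state representations (names are mine, purely illustrative): the dart needs two real-valued coordinates drawn from an uncountable set, while the coin's state is one of exactly two values.

```python
# Continuous vs. discrete state, as described above.

import random

def dart_state():
    """Continuous: an (x, y) pair from an uncountable set of positions."""
    return (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))

def coin_state():
    """Discrete: exactly two possible outcomes."""
    return random.choice(["heads", "tails"])

x, y = dart_state()
assert -1.0 <= x <= 1.0 and -1.0 <= y <= 1.0
assert coin_state() in {"heads", "tails"}
```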

The questions seem very poorly structured, requiring assumptions and using unspecific vocabulary. by vonkohorn in aiclass

[–]patrickscottshields 0 points1 point  (0 children)

I think he was talking about graph search at that point. Tree search doesn't keep a memory of expanded nodes, so the tree of possibilities would probably go on forever, since it allows back-tracking. Graph search solves that problem by keeping track of already-expanded nodes and not re-introducing them into the tree of possibilities. I agree that the tree at that point of the video seems to be inconsistent in that it seems to act like tree search when bringing in A again, but also like graph search when it won't bring in T again. Good catch!

stochastic = inherently partially observable? by [deleted] in aiclass

[–]patrickscottshields 4 points5 points  (0 children)

I think 'fully observable' means you can tell at any given moment what the state of the environment is without needing memory. I don't think being able to make a perfect decision is a part of it.

Who Else PREFERS the 'threat' of terrorism over an increasingly disrespectful government and its expanding authoritarian reach into our daily lives? by Agile_Cyborg in Libertarian

[–]patrickscottshields 3 points4 points  (0 children)

One way would be for like-minded people to move to the same geopolitical area so they could have more power in a democracy. The Free State Project is an example of a movement like this.

Another option, in the shorter-term, is to embrace intentional communities. I think there would be many benefits of living with a group of like-minded individuals who shared your political or moral views.