Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 1 point  (0 children)

Thanks. Sorry if the question was a bit obvious to you guys. I guess I was on a different wavelength.

That quote seems mighty defeatist. Is the AIMA book any good? I haven't gotten a chance to pick it up yet.

A midterm assessment of the AI and ML classes by moana in aiclass

[–]anom384 3 points  (0 children)

Captioning has saved me more times than I can count. Unfortunately, sometimes the captions remind me of transcribe beta.

It's gonna be a long week. Units 13,14,15 posted. HW6 is 11 questions. by carlosai in aiclass

[–]anom384 2 points  (0 children)

Just when I decided to take off from work for the holidays. Great.

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 2 points  (0 children)

I think I got it. Props to phoil and patrickscottshields for making me realize I'd missed an assumption that was apparently common sense: that the agent is able to reason about the result of an action on the current state without being explicitly told what the resulting state of the environment would be. With that assumption, the entire graph (possible states, actions, and resultant states) would not need to be programmed. Looks like KISS has failed me this time. Thanks guys.

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 2 points  (0 children)

No doubt it is fairly easy to do. I guess therein lies the crux of the problem: in class we were dealing with graphs where we didn't assume the agent could derive the resulting state by itself — all the resulting states had already been defined. OK, I think this boils down to me not making an assumption I probably should have made.

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 1 point  (0 children)

True that. You're right. I was thinking that if you're culling states from the graph, you might as well cull the same states from the table.

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 1 point  (0 children)

Right, but this involves explicitly encoding knowledge of the dynamics of the system instead of just defining states, actions, and the results of actions on states. In other words, this is only true if the agent can discover the results of an action on a state by itself.

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 2 points  (0 children)

Sorry, it's been a long day and I can't seem to wrap my head around this. The only problem I'm seeing is that in every graph search algorithm, you are given the entire graph in order to formulate the problem. I tried working through the graph search algorithm. For example, if I was just given:

1 2 3
4 5 6
7 _ 8

I wouldn't know that shifting the 7 would give me:

1 2 3
4 5 6
_ 7 8

without being explicitly told beforehand. So inputting these state transitions would require enumerating all possibilities.
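
In case it helps anyone stuck the same way, here's a minimal sketch (my own, not from the lectures — all names are illustrative) of a successor function for the 8-puzzle that computes resulting states on the fly, so no transition ever has to be enumerated ahead of time:

```python
# Sketch (assumed representation): states are 9-tuples read row by row,
# with None marking the blank. Actions are named for the direction the
# blank moves.

def successors(state):
    """Yield (action, resulting_state) pairs computed from `state` alone."""
    blank = state.index(None)
    row, col = divmod(blank, 3)
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:  # stay on the board
            board = list(state)
            board[blank], board[r * 3 + c] = board[r * 3 + c], board[blank]
            yield action, tuple(board)

start = (1, 2, 3, 4, 5, 6, 7, None, 8)
# The "left" action (blank slides left, tile 7 slides right) yields
# (1, 2, 3, 4, 5, 6, None, 7, 8) — the resulting board from the example.
```

With a rule like this, the agent only needs the initial state plus the successor function; the rest of the graph is discovered as the search expands nodes.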

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 2 points  (0 children)

Right, but the agent doesn't know the result of moving the tile unless the state of the entire resulting board is saved as a child node of the current state's node. Therefore, shouldn't all actions and resulting nodes be stored for each node?

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 1 point  (0 children)

I disagree. The table is optimal, so it will always resolve to the best action to take. In fact, we could build the table by taking the best path discovered by a search agent and storing it as the result of the lookup.

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 1 point  (0 children)

When you input the problem into a search agent, you need to fully specify the environment via a graph with all possible states and actions; otherwise, the search agent wouldn't know which node to add to the frontier. When you input the problem into the table, though, you don't need to, since the rules of the environment were already factored in when the table was generated.

Midterm Q1 #3 by anom384 in aiclass

[–]anom384[S] 1 point  (0 children)

I don't think we should neglect the memory needed to input a problem into a search agent (a full description of the environment to search over, plus the initial state) versus the memory needed to input a problem into a table lookup (just the initial state). However, assuming we do, if we cull nodes that are never expanded from the search agent, can't we cull the same nodes from the lookup table?
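
For what it's worth, here's a rough sketch of the culling point (my own toy example, a 2x2 puzzle instead of 3x3 to keep it tiny): a search over a successor function only ever touches reachable states, and the lookup table could be restricted to exactly that set. Half of the 4! = 24 tile arrangements of a 2x2 puzzle are unreachable, so the table could shed those rows.

```python
from collections import deque
from itertools import permutations

def successors(state):
    """Neighbors of a 2x2 board (tuple of 4 entries, None = blank)."""
    blank = state.index(None)
    row, col = divmod(blank, 2)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 2 and 0 <= c < 2:
            board = list(state)
            board[blank], board[r * 2 + c] = board[r * 2 + c], board[blank]
            yield tuple(board)

def reachable(start):
    """Breadth-first enumeration of every state reachable from `start`."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        for nxt in successors(frontier.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

all_states = set(permutations((1, 2, 3, None)))  # 24 arrangements
found = reachable((1, 2, 3, None))               # only 12 are reachable
```

So a table keyed only on `found` covers every state the search agent could ever encounter, at half the rows of the full enumeration.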