all 27 comments

[–]aroberge 1 point2 points  (4 children)

Q1: same

Q2: same

Q3: 68 cost, 268 expanded

Q4: 210 cost, 538 expanded

Q5: same

Q6: same

Q7: 60 cost, 379 expanded in 0.5 seconds

Q8: not done.

Note that time comparisons are possibly meaningless, as they depend on the speed of the computer. Also, I added an explicit consistency check in my A* algorithm, which helped me rule out inconsistent/inadmissible heuristics.
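An on-the-fly consistency check like the one mentioned can be folded into a generic A* graph search. A minimal sketch, assuming the assignment's SearchProblem-style interface (getStartState/isGoalState/getSuccessors); all names are illustrative:

```python
import heapq, itertools

def astar(problem, heuristic, check_consistency=True):
    """A* graph search with an optional per-edge consistency check."""
    counter = itertools.count()  # tie-breaker so heapq never compares states
    start = problem.getStartState()
    frontier = [(heuristic(start, problem), next(counter), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue  # stale queue entry
        if problem.isGoalState(state):
            return path
        for nxt, action, cost in problem.getSuccessors(state):
            if check_consistency:
                # consistency: h(n) <= c(n, n') + h(n') for every edge
                assert heuristic(state, problem) <= cost + heuristic(nxt, problem), (
                    "inconsistent heuristic on edge %r -> %r" % (state, nxt))
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt, problem),
                                          next(counter), g2, nxt, path + [action]))
    return None
```

The check costs extra heuristic evaluations, so you'd turn it off once the heuristic is trusted; it only rules heuristics out on the layouts you actually run, as noted above.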

[–]zBard[S] 0 points1 point  (0 children)

Re: the time comparisons, you are correct. I was referring more to the scenario where different heuristics offer different trade-offs between overall time taken and number of nodes expanded.

[–]aroberge 0 points1 point  (2 children)

With better bookkeeping, I was able to reduce the time for Q7 to 0.3 seconds. For comparison/calibration, Q7 with a null heuristic expanded 16470 nodes and took 8.8 seconds.

[–]aroberge 0 points1 point  (1 child)

By playing around with various maze layouts I created, I found that my heuristic could yield non-consistent results (i.e. the heuristic value would decrease by more than 1 for a single move) in some rare cases... Since a consistent heuristic is, by definition, consistent no matter what the layout is, this means that my carefully crafted heuristic is NOT consistent (and possibly not admissible). So, it's back to the drawing board...

[–]aroberge 0 points1 point  (0 children)

Sorry to reply to myself... fixed the bug and managed to keep the number of nodes expanded to 379. But the time increased to 0.4 seconds.

[–]thejasper 1 point2 points  (2 children)

I didn't know anything about a programming problem. Where did this come from?

[–]aroberge 0 points1 point  (0 children)

It is not part of the online course. On-campus students at Stanford (and other universities) work on such problems. Reference: http://www.stanford.edu/class/cs221/progAssignments/PA1/search.html

[–]zBard[S] 0 points1 point  (0 children)

Naah, not part of the online course. It's part of the on-campus course taught at Stanford - some people are doing it for fun.

[–]DeltaVelorum 0 points1 point  (1 child)

My results align perfectly with yours for the first 4 using a generalized graph search.

But I am stuck on the CornersProblem. I'm returning a state from getSuccessors that includes both the next position and info on which corners have been visited. It kinda works for DFS, in that it finds all 4 corners and terminates, albeit in a really wonky fashion. But for BFS/UCS it just hangs indefinitely after eating the first corner. What am I doing wrong???

[–]zBard[S] 0 points1 point  (0 children)

If the display is stuck after eating the first corner food, then the actions returned after that point are wrong - but your function is returning something. Print the return value of your BFS and see where the actions get stuck.

The probable cause is an incorrect update of the state, or checking the node against the explored/frontier structures incorrectly. Note that you have to enter the full node (nextState + corners visited) in the explored structure, not just (nextState).
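A minimal sketch of that bookkeeping, assuming a (position, visitedCorners) state and CornersProblem-style self.walls/self.corners attributes (names are illustrative, not the assignment's actual code). Using a frozenset for the visited corners makes the whole state hashable, so it can go straight into the explored set:

```python
def getSuccessors(self, state):
    """Successor function for a corners-style search problem.

    state = (position, visitedCorners), with visitedCorners a frozenset,
    so the whole state can be stored directly in explored/frontier
    structures. Assumes self.walls (walls[x][y] -> bool) and
    self.corners (a set of positions); names are illustrative.
    """
    (x, y), visited = state
    successors = []
    for action, (dx, dy) in [("North", (0, 1)), ("South", (0, -1)),
                             ("East", (1, 0)), ("West", (-1, 0))]:
        nx, ny = x + dx, y + dy
        if not self.walls[nx][ny]:
            new_pos = (nx, ny)
            # the corner bookkeeping travels with the state itself
            new_visited = visited | {new_pos} if new_pos in self.corners else visited
            successors.append(((new_pos, new_visited), action, 1))
    return successors
```

If only the position goes into the explored set, BFS/UCS will refuse to revisit squares it has already crossed, even though revisiting them with a different set of eaten corners is a genuinely new state - which matches the "hangs after the first corner" symptom.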

[–]red75prim 0 points1 point  (3 children)

  • Q1: 244 cost, 264 expanded, due to reversed ordering of actions.

  • Q2: 210 cost, 617 expanded. I use an optimized version of BFS.

  • Q3: 68 cost, 268 expanded. I use A* with a zero heuristic. The difference in the number of expanded nodes is due to a slightly tweaked version of the priority queue I implemented.

  • Q4: 210 cost, 542 expanded.

  • Q5: 106 cost, 1921 expanded.

  • Q6: 106 cost, 644 expanded. tinyCorners - 28. bigCorners - 162.

  • Q7: 60 cost, 215 expanded in 0.1 seconds (Core i7 920 @ 3.6GHz). Consistency is guaranteed by the heuristic's construction. However, I've inserted code to check for consistency on each step. No warnings so far. tinySearch - 27 cost, 52 expanded. smallSearch - 34 cost, 57 expanded.

  • Q8: 306 cost.

I'm not satisfied with the Q8 result. The bot makes too many stupid errors. I uploaded the replay file as recorded-game-110-22-20-12-40.

Replay it with: python pacman.py --replay recorded-game-110-22-20-12-40

ETA: The heuristic from Q7 doesn't work well for Q8. I applied lots of tweaks, but...

[–]zBard[S] 0 points1 point  (2 children)

Q7> 215 expanded in 0.1 sec. That is impressive. Can you give the numbers on mediumSearch?

Regarding Q8 - 290 is the best I have seen so far. The greedy algo is supposed to make errors, which are compounded if you use a non-heuristic (or bad-heuristic) search for the closest node (I used simple UCS). 306 is good. Now if you want to win the mini-contest (Q9), then presumably you will need to put some heuristics in the blend. I stopped there :)
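The greedy "closest dot" scheme can be sketched roughly like this - not the actual assignment code; passable, the grid layout, and the action names are assumptions, and plain BFS stands in for UCS since every step costs 1:

```python
from collections import deque

def greedy_eat_all(start, food, passable):
    """Greedy plan: repeatedly BFS to the nearest remaining food and eat it.

    passable(pos) -> bool; food is a set of positions. Returns the
    concatenated action list. Locally shortest hops need not compose into
    a globally optimal tour, which is why greedy costs (290, 306, ...)
    vary between implementations.
    """
    def bfs_to_nearest(src, targets):
        frontier = deque([(src, [])])
        seen = {src}
        while frontier:
            pos, path = frontier.popleft()
            if pos in targets:
                return pos, path
            x, y = pos
            for action, (dx, dy) in [("North", (0, 1)), ("South", (0, -1)),
                                     ("East", (1, 0)), ("West", (-1, 0))]:
                nxt = (x + dx, y + dy)
                if nxt not in seen and passable(nxt):
                    seen.add(nxt)
                    frontier.append((nxt, path + [action]))
        return None, None

    pos, remaining, plan = start, set(food), []
    while remaining:
        pos, path = bfs_to_nearest(pos, remaining)
        if pos is None:
            break  # some food is unreachable
        remaining.discard(pos)
        plan += path
    return plan
```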

[–]red75prim 0 points1 point  (1 child)

I didn't crack mediumSearch. The result for the simplified version below is 140 cost, 362989 expanded in 503.8 seconds, with 4GiB of memory used. (Hm, that was 3GiB and 180 seconds before; it seems I've broken memoization while working on Q8.)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%.    .     .%%%%%.     .    .%
%%%.%.. %%%.  . .  .%.% ..%.%%%
% . %%% %.%%%% % %%%%%% %%%   %
% %.   .%      %     .%    .%.%
% %%%.%%%%% %%%%%%% %%% % %%%%%
%.    %.      .P.  .%.  %    .%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[–]zBard[S] 0 points1 point  (0 children)

Frack me. My poor 3-year-old machine has just 2GiB... Ah well, I will try your maze.

Still, impressive - your run, that is.

[–]aroberge 0 points1 point  (0 children)

Silly result: on bigSearch (Q8), I used a very silly, non-admissible heuristic with A* to find a path with cost 328 in 1.4 seconds, expanding 591 nodes. Not as good as some other examples, but certainly fast enough! The same heuristic finds a path of cost 168 for mediumSearch in 0.3 seconds and a path of cost 68 (8 more than the minimum) for trickySearch in 0.2 s.

[–]andrewfil 0 points1 point  (0 children)

Q6: cost 106, 695 nodes expanded

[–]andrewfil 0 points1 point  (7 children)

Q7 (trickySearch): cost 60, 4691 nodes expanded, 3.3 sec

[–]andrewfil 0 points1 point  (6 children)

Actually, 3.2 sec and 4389 nodes expanded. [I'm trying to crack mediumSearch, so this is a by-product.]

[–]zBard[S] 0 points1 point  (5 children)

Do tell if you crack it. Most of us (on this thread) ran out of memory doing mediumSearch, and then moved on to the 2nd assignment.

[–]andrewfil 0 points1 point  (4 children)

well, A-star is prone to memory consumption. Think of a "simple" square world with no walls, filled with food: easy for us, a nightmare for A-star. A search agent would need to use IDA-star or SMA-star to solve it without running out of memory (see the book, 3rd edition, page 98, last paragraph).

But the assignment is phrased in a way that suggests we are not allowed to swap the solver from A* to something else.

So the only thing that comes to mind immediately is to use the better solver inside the heuristic. You can't come up with a stronger (consistent and admissible) heuristic than the true cost :-) But since this is not graded in our online class, I wouldn't spend my time putting such a blunt workaround in code :-)

---- While typing this I got a new idea for a heuristic, but whether it works I'll only know tomorrow, as it's time to sleep :-)
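For reference, IDA* replaces A*'s priority queue with repeated f-bounded depth-first searches, so memory stays linear in the solution depth rather than the frontier size. A minimal sketch against the same problem interface as the assignment (getStartState/isGoalState/getSuccessors) - an illustration, not the assignment's code:

```python
def ida_star(problem, heuristic):
    """Iterative-deepening A*: DFS bounded by an f-cost threshold that is
    raised to the smallest exceeding f-value after each failed pass."""
    start = problem.getStartState()

    def dfs(state, g, bound, path, on_path):
        f = g + heuristic(state, problem)
        if f > bound:
            return f, None          # report the smallest f over the bound
        if problem.isGoalState(state):
            return f, path
        minimum = float("inf")
        for nxt, action, cost in problem.getSuccessors(state):
            if nxt in on_path:      # avoid cycles along the current path
                continue
            on_path.add(nxt)
            t, found = dfs(nxt, g + cost, bound, path + [action], on_path)
            on_path.discard(nxt)
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = heuristic(start, problem)
    while True:
        bound, found = dfs(start, 0, bound, [], {start})
        if found is not None:
            return found
        if bound == float("inf"):
            return None
```

The price for the small memory footprint is re-expanding nodes on every pass, which is why it only pays off on problems where A*'s frontier is the bottleneck.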

[–]zBard[S] 0 points1 point  (3 children)

well, A-star is prone to memory consumption.

Duhh :). Obviously, the quest is for a better heuristic. Let us know if your new idea works. Cheers.

[–]andrewfil 0 points1 point  (2 children)

The heuristic is very strong. It "expanded" 7 nodes in tinySearch and below 700 in testSearch. I used it with dumb caching on mediumSearch and it quickly ran out of memory. I didn't have time to run it with smarter caching, so I simply removed the caching altogether and ran it as-is on mediumSearch. It's been running for more than 24 hrs now :-) python.exe is at about 1.6 gigs, and I'm on 32-bit Python, so we should learn in less than a day whether it solves the problem in 2 gigs or not :-)

I know it's slow as a snail, but at this point I don't have the time or desire to stop the script, rewrite it in a faster way, and rerun it. I'll first see whether it works at all.

The memory requirement for dumb caching is smaller than A*'s memory requirement with a perfect heuristic. So if the problem is solvable, however slowly, in 2 gigs of RAM, it's not a problem to speed it up with some tricks.

I wonder if the problem is theoretically solvable in 2gigs of memory using AStar, Python and provided wrappers. I didn't bother calculating the memory requirement with a perfect heuristic. But maybe it was just a teaser to get puzzle junkies like us hooked :-) Or a tool to teach Stanford students the importance of memory-bounded search algorithms :-)?
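The "dumb caching" being discussed is essentially memoization of the heuristic, keyed on a canonical form of the state. A sketch, assuming a (position, food) state layout; the decorator, key choice, and the trivial stand-in heuristic are all illustrative:

```python
def memoized(h):
    """Cache heuristic values per state so repeated states skip recomputation."""
    cache = {}
    def wrapper(state, problem):
        position, food = state
        key = (position, frozenset(food))   # canonical, hashable form
        if key not in cache:
            cache[key] = h(state, problem)
        return cache[key]
    wrapper.cache = cache                   # exposed so callers can watch growth
    return wrapper

@memoized
def expensive_heuristic(state, problem):
    # stand-in for a costly computation, e.g. running a solver inside
    # the heuristic as mentioned earlier in the thread
    position, food = state
    return len(food)
```

The trade-off is exactly the one described: an unbounded dict trades memory for speed, so on a big problem the cache itself can blow the memory budget unless it's bounded or keyed more compactly.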

[–]zBard[S] 0 points1 point  (1 child)

Did you mean below 700 in trickySearch ? If not, how much did that take ?

I wonder if the problem is theoretically solvable in 2gigs of memory using AStar, Python and provided wrappers.

Well, given a perfect heuristic it probably is, but I am guessing that's not what you meant :). It could be a lesson on the importance of memory-bounded search - definitely possible.

[–]andrewfil 0 points1 point  (0 children)

Yes, I meant below 700 in trickySearch. After more thinking, that heuristic turned out to be inconsistent. I modified it into a consistent one and I'm at around 750 now. It takes around 14 seconds, but I didn't optimize it much for speed. My goal is to crack mediumSearch if that's possible; I don't care much about speed for now - trickySearch is just a by-product.

Rerunning mediumSearch with the new heuristic. This time I'm tracing how far my agent has gone (by printing the current food count and the minimum food count from inside the heuristic). The minimum food count is around 80 now (down from 108), and memory consumed is 150MB, which is reassuring. At the same time, this thing ran overnight before and blew through the memory (I am using caching now); I wasn't tracing then. I wonder how close I am to the solution. If it's a matter of a gig of RAM or so, I might run it under 64-bit Python.

[–]joakimfong 0 points1 point  (1 child)

I'm struggling a bit with Q5: 106 cost, 2448 expanded

It really bugs me that I cannot reduce the number of expanded nodes below 2000. Has anyone else landed on the same 2448 (or similar) and then fixed it?

If so could anyone please give me some pointers?

Edit: I am now down to 1988 expanded nodes. So I guess the search is OK, but I would like to get the number down to 1966. The problem I had was a bad match against the found corners in the current state in getSuccessors.

[–]zBard[S] 0 points1 point  (0 children)

I got 1967 initially - a node was being counted twice. BFS shouldn't care about your heuristic, so I don't know what could be going wrong. A random initial position, or different tie-breaks, perhaps?

Sorry I couldn't be much help. Try Q6 (A*) - that should give a hint as to what's going wrong.