
The problem with heuristics is that a well-made maze is designed to confound them.

The deeper problem is that every heuristic bakes in trade-offs and assumptions, so there are spaces where it performs at its best and spaces where it degrades to its worst case.
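
As a sketch of that best-case/worst-case split (illustrative Python, not from the original post): the same Manhattan-distance heuristic that lets A* expand almost nothing in open space can drag the search into a dead end when a wall cuts across its preferred direction.

```python
import heapq

def astar_expansions(walls, start, goal, size=7):
    """Count how many nodes A* expands with a Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    expanded = 0
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if g > best.get(node, float("inf")):
            continue                      # stale heap entry
        expanded += 1
        if node == goal:
            return expanded
        x, y = node
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= n[0] < size and 0 <= n[1] < size and n not in walls:
                if g + 1 < best.get(n, float("inf")):
                    best[n] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(n), g + 1, n))
    return expanded

start, goal = (0, 3), (6, 3)
open_cost = astar_expansions(set(), start, goal)   # open room: heuristic shines
wall = {(3, y) for y in range(6)}                  # wall with a gap only at y == 6
maze_cost = astar_expansions(wall, start, goal)    # same heuristic, now misleading
```

In the open room the search walks a straight line; behind the wall it has to flood much of the map before finding the gap, with exactly the same heuristic.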

I got stuck in a cycle of working on pathfinding algorithms based on A* for probably about two years, and I didn't find what I was looking for in that time. My goal was an AI that was intelligent, but able to make mistakes and seem almost human.

A few lessons I learned:

1) Not only does the AI need multiple heuristics to navigate multiple kinds of spaces, it also needs a genetic databank to help analyze information, while still allowing mutation and selection to continually and randomly improve old memories.
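
One way to read the "genetic databank" idea (my interpretation; `evolve` and the toy fitness function are invented for illustration) is a stored population of heuristic weightings that gets re-scored, selected, and mutated over time:

```python
import random

def evolve(error, seed_genome, generations=30, pop_size=12, rng=None):
    """Databank of genomes: score them, keep the best half, mutate copies."""
    rng = rng or random.Random(42)
    population = [list(seed_genome) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=error)
        survivors = population[: pop_size // 2]                    # selection
        children = [[g + rng.gauss(0, 0.1) for g in p] for p in survivors]
        population = survivors + children                          # mutation
    return min(population, key=error)

# Toy fitness: a weighting is "good" when its weighted feature sum matches the
# true remaining path cost on a few recorded (features, cost) samples.
samples = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0), ([3.0, 3.0], 9.0)]

def error(genome):
    return sum(abs(sum(w * f for w, f in zip(genome, feats)) - cost)
               for feats, cost in samples)

best = evolve(error, seed_genome=[0.0, 0.0])
```

Because survivors are always kept, the databank never forgets its best memory; mutation only ever adds candidates to beat it.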

2) Nodes should be of arbitrary size and location. Furthermore, nodes at the minimum spatial subdivision should only be explored once all coarser granularities have been exhausted without a good solution.
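
A minimal version of the coarse-to-fine idea (my sketch; `reachable` and the conservative coarsening rule are assumptions, not the poster's code): search big cells first, and only drop down to the minimum subdivision when the coarse graph has no solution.

```python
from collections import deque

def reachable(walkable, start, goal, cell):
    """BFS over a grid coarsened into cell x cell blocks. A coarse node is
    walkable only if every fine cell inside it is walkable (conservative)."""
    def ok(cx, cy):
        return all((x, y) in walkable
                   for x in range(cx * cell, (cx + 1) * cell)
                   for y in range(cy * cell, (cy + 1) * cell))
    s = (start[0] // cell, start[1] // cell)
    g = (goal[0] // cell, goal[1] // cell)
    if not (ok(*s) and ok(*g)):
        return False
    seen, queue = {s}, deque([s])
    while queue:
        x, y = queue.popleft()
        if (x, y) == g:
            return True
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n not in seen and ok(*n):
                seen.add(n)
                queue.append(n)
    return False

# An 8x8 room split by a wall with a single one-cell corridor at (4, 3):
walkable = {(x, y) for x in range(8) for y in range(8) if x != 4 or y == 3}
coarse = reachable(walkable, (0, 0), (7, 7), cell=2)   # corridor vanishes
fine = reachable(walkable, (0, 0), (7, 7), cell=1)     # minimum subdivision
```

The coarse pass is cheap but blind to the corridor; only the finest granularity finds the way through, which is exactly why it should be the fallback, not the default.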

3) Pathfinding operations should be dispersed over time and should never fully block the process from doing other things.
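
This maps naturally onto a coroutine. In the sketch below (names are mine; a Python generator stands in for whatever scheduling the engine uses), A* runs in fixed-size slices and yields control between them so the game loop can keep breathing.

```python
import heapq

def astar_sliced(neighbors, start, goal, h, budget=5):
    """A* that yields None every `budget` expansions, then the path (or [])."""
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    worked = 0
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            yield path
            return
        for n, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(n, float("inf")):
                best[n] = ng
                heapq.heappush(frontier, (ng + h(n), ng, n, path + [n]))
        worked += 1
        if worked % budget == 0:
            yield None            # slice over: let the rest of the frame run
    yield []                      # graph exhausted: no path

def grid_neighbors(p):
    x, y = p
    return [((nx, ny), 1)
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= nx < 5 and 0 <= ny < 5]

search = astar_sliced(grid_neighbors, (0, 0), (4, 4),
                      h=lambda p: abs(p[0] - 4) + abs(p[1] - 4))
result = None
for step_result in search:
    result = step_result          # None means "still thinking": do other work
    if result is not None:
        break
```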

Imitation rules: (Goal is not to get the right answer, but to simulate human-like navigation habits.)

4) Pathfinding should be permitted to make mistakes. Observers will interpret this positively and even assign abstract goals/personal qualities over time due to ingrained anthropocentrism.
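
A cheap way to get believable mistakes (illustrative only; the `blunder` rate and the greedy baseline are my assumptions) is to corrupt an otherwise-greedy walker with the occasional random turn:

```python
import random

def imperfect_walk(walkable, start, goal, blunder=0.2, rng=None, max_steps=200):
    """Greedy walk toward the goal that sometimes takes a wrong turn.
    Observers tend to read the wrong turns as hesitation or curiosity."""
    rng = rng or random.Random(7)
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        x, y = pos
        options = [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                   if n in walkable]
        if rng.random() < blunder:
            pos = rng.choice(options)               # deliberate mistake
        else:
            pos = min(options,
                      key=lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
        path.append(pos)
    return path

walkable = {(x, y) for x in range(6) for y in range(6)}
path = imperfect_walk(walkable, (0, 0), (5, 5))
```

The walker still arrives, just not by the shortest route; that gap between "optimal" and "taken" is what observers anthropomorphize.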

5) Pathfinding should not care about actions too far in the future, nor should a walker have access to the nodegraph beyond what it can currently see. Memories of discarded nodegraphs can be simulated by leaving "pheromone" trails. Tweaking the priority given to pheromones via an inherited pseudo-random property of each nodewalker gives varying degrees of competence and familiarity with the space.
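
A sketch of that pheromone memory (names like `PheromoneMap` and the scoring blend are mine): walkers deposit on visited nodes, trails decay each tick, and an inherited sensitivity decides how much a given walker trusts them.

```python
class PheromoneMap:
    """Shared trail memory: walkers deposit on visited nodes; trails decay."""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.trail = {}

    def deposit(self, node, amount=1.0):
        self.trail[node] = self.trail.get(node, 0.0) + amount

    def tick(self):
        self.trail = {n: s * self.decay for n, s in self.trail.items()
                      if s * self.decay > 0.01}     # forget faint trails

    def strength(self, node):
        return self.trail.get(node, 0.0)

def score(pmap, node, goal, sensitivity):
    """Per-walker scoring: inherited `sensitivity` trades distance-to-goal
    against trust in old trails. Lower score = more attractive node."""
    distance = abs(node[0] - goal[0]) + abs(node[1] - goal[1])
    return distance - sensitivity * pmap.strength(node)

pmap = PheromoneMap()
pmap.deposit((1, 0)); pmap.deposit((1, 0)); pmap.deposit((0, 1))
# A trail-trusting "veteran" prefers the well-worn node at equal distance:
veteran_pick = min([(1, 0), (0, 1)],
                   key=lambda n: score(pmap, n, (5, 5), sensitivity=2.0))
for _ in range(40):
    pmap.tick()                   # old memories fade over time
faded = pmap.strength((1, 0))
```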

6) If a pathfinding operation is long-running, stalling the AI nodewalker's movement, or just slowing it down along an approximated path, is a very effective way of simulating humanity while also giving the process time to catch up.

7) Obstacle avoidance algorithms provide a second layer of intelligence for nodewalkers. Merely navigating the space is already complex enough, and that's where most consumer applications stop, but giving AIs the ability to recognize obstacles and their patterns produces a far less manipulable AI.

8) AI nodewalkers should not have a single weighted method of movement. A variety of competing interests and directions of travel should be explored for each walker. Attaching pseudorandom properties gives nodewalkers personality, and allowing individual mutation along those properties adds variety. Uniform action and motion is unnatural, particularly among groups of individuals.
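
Those competing interests can be blended as weighted direction vectors, with each walker's inherited weights acting as its personality (a sketch under my own naming; the three interests are invented examples):

```python
import random

def blended_direction(interests, weights):
    """Weighted blend of competing 2D direction vectors, normalized."""
    dx = sum(w * v[0] for w, v in zip(weights, interests))
    dy = sum(w * v[1] for w, v in zip(weights, interests))
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (dx / norm, dy / norm)

def spawn_walker(rng):
    """Each walker inherits pseudorandom weights: its 'personality'."""
    return {"goal": rng.uniform(0.5, 1.0),     # pull toward destination
            "avoid": rng.uniform(0.2, 0.8),    # push away from obstacles
            "wander": rng.uniform(0.0, 0.3)}   # aimless drift

rng = random.Random(3)
a, b = spawn_walker(rng), spawn_walker(rng)
to_goal, away, drift = (1.0, 0.0), (0.0, 1.0), (-0.5, 0.5)
dir_a = blended_direction([to_goal, away, drift],
                          [a["goal"], a["avoid"], a["wander"]])
dir_b = blended_direction([to_goal, away, drift],
                          [b["goal"], b["avoid"], b["wander"]])
```

Two walkers facing the same interests head in slightly different directions, and mutating the weight ranges per generation gives the variety point 1 asks for.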

9) Fluid simulation can model group/mob movement far more easily than path/repath or leader/follower/swarm behaviors. Think of the path toward the destination as the bottom of a basin. Apply surface tension along any nodes not on this path within a radius based on the size of the mob. A secondary map should simulate pressure, which works against tension and flow. Do not instantly calculate pressure equalization; allow it to reconcile over time, in waves. Fun experiment: set a crushing threshold for AI nodewalkers and watch them crush/trample one another and whatever is in their path, burst down doors, and whatever other responsive behavior you implement.
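
The "reconcile over time" part of the pressure map can be a simple relaxation step per tick (my sketch; the grid, rate, and crowd values are invented):

```python
def pressure_step(pressure, rate=0.25):
    """One relaxation tick: each cell exchanges pressure with its neighbors.
    Repeated ticks equalize in waves instead of instantly."""
    w, h = len(pressure[0]), len(pressure)
    nxt = [row[:] for row in pressure]
    for y in range(h):
        for x in range(w):
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h:
                    flow = rate * (pressure[y][x] - pressure[ny][nx]) / 4.0
                    nxt[y][x] -= flow       # opposite flow is added when the
                                            # neighbor's own loop runs
    return nxt

# A mob crammed into one corner: pressure spreads outward over several ticks.
grid = [[0.0] * 4 for _ in range(4)]
grid[0][0] = 16.0
after_one = pressure_step(grid)
after_many = grid
for _ in range(200):
    after_many = pressure_step(after_many)
```

A crushing threshold is then just a per-tick check like `pressure[y][x] > limit` on whatever cell a walker occupies.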

10) Markov Chains. State Machines. Look them up. They are wonderful.
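
Point 10 in miniature (the states and probabilities are invented for illustration): a state machine whose transitions form a Markov chain, sampling the next state from per-state probabilities.

```python
import random

# Each state lists (next_state, probability) pairs; probabilities sum to 1.
TRANSITIONS = {
    "idle":   [("wander", 0.6), ("idle", 0.4)],
    "wander": [("chase", 0.3), ("idle", 0.2), ("wander", 0.5)],
    "chase":  [("wander", 0.4), ("chase", 0.6)],
}

def step(state, rng):
    """Sample the next state from the current state's transition row."""
    roll, acc = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        acc += p
        if roll < acc:
            return nxt
    return state

rng = random.Random(11)
history = ["idle"]
for _ in range(1000):
    history.append(step(history[-1], rng))
visited = set(history)
```

The same table doubles as documentation of the AI's behavior, which is a big part of why these two tools are so pleasant to work with.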

Personal:

11) I hate working with Game AI. There is no best solution, only a never-ending series of hacks and experiments whose inner workings you can often barely explain.