[I left this article up so that I don’t break URLs but it’s a really old post on a newsgroup and I am not the original author. --amitp]
From rec.games.programmer:
Warning: There are two entirely different sorts of problems that fit the original statement--different in the sense that entirely different sorts of algorithms are “good”.
On a semi-sparsely populated grid (there’s no absolute cut-off here), you probably want to use what I think is called an A* search. The basic characteristic is that it’s guaranteed to find the shortest path (if you use a lower bound as the heuristic), and that it’s one of the fastest ways to find it with that guarantee.
On the other hand, if it’s very filled, e.g. a maze, then you just have the maze-solving problem, and there are various possible solutions.
If it’s a REAL maze, i.e. there’s only one route to the destination, then my answer isn’t very interesting, because the problem really isn’t.
Generally, when faced with the problem you have, I use one or more items from my “bag of tricks”, depending on how maze-like the thing is.
- Distance table:
If you have multiple objects moving towards the same destination, you may really want to consider this: you do a breadth-first search of the map, recording the distance of every place on the map from the destination in a distance table. Then, from any given spot on the map, you look up what distance you are, and check which adjacent square is closer. This (as phrased so far) returns “perfect” routes.
I’ve often used this technique (well, twice) for games where several “enemies” are moving towards the “player”. Since the player can move, however, you have to recalculate the distance table every frame. Since an imperfect answer may be acceptable, there are ways to handle partial updates (as the player moves) iteratively with some efficiency.
If people are interested I can give more details.
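Here’s a rough sketch of the basic version in Python, just to make it concrete. The grid format (a list of strings, with ‘#’ for walls) and the function names are made up purely for illustration:

```
from collections import deque

def make_distance_table(grid, dest):
    # Breadth-first search outward from the destination, recording
    # each reachable square's distance from it.
    dist = {dest: 0}
    frontier = deque([dest])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])
                    and grid[ny][nx] != '#' and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                frontier.append((nx, ny))
    return dist

def step_toward(dist, pos):
    # From pos, step to whichever adjacent square is closer to the
    # destination according to the table; returns pos itself if there's
    # nowhere closer, or None if pos can't reach the destination at all.
    if pos not in dist:
        return None
    x, y = pos
    neighbors = [n for n in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)) if n in dist]
    best = min(neighbors, key=lambda n: dist[n], default=pos)
    return best if dist[best] < dist[pos] else pos
```

Every enemy can then call step_toward against the same table each frame, which is why this pays off when lots of things are chasing one target.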
- Routing tables:
Subdivide the map into “regions”. Precompute a “hub” for each region. Precompute the shortest route from any hub to another (probably just by recording routes to adjacent hubs, and building longer routes by stringing them together). Also, keep routing information on how to get from any square to the hub for that region (using the distance table is one way, but a table of directions is easier).
Now, to get from A to B, you first go to the hub nearest A, then you use the routing table to get to the hub nearest B. Then you compute the route from B to its hub, and work backwards from it.
This can cause really odd movements when there’s a shorter route; for instance, if B is closer to A than B’s hub is, you can go right past B. That has an easy fix--compute the route from B to its hub, and check if you run into it while going from A to B’s hub. Of course, that’ll only detect the “shortest route”... well, it’s not important.
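To make the lookup side concrete, here’s a rough sketch in Python. It assumes the precomputation (region assignment, one hub per region, per-square routes to the local hub, and square-by-square legs between adjacent hubs) has already been done and handed over as plain dictionaries; all the names and data layouts here are just illustrative:

```
from collections import deque

def hub_to_hub_routes(hub_graph):
    # For every (start hub, goal hub) pair, record which adjacent hub to
    # head for next.  hub_graph maps each hub to the hubs of adjacent regions.
    next_hub = {}
    for start in hub_graph:
        parent = {start: None}
        frontier = deque([start])
        while frontier:                      # BFS over the hub graph
            h = frontier.popleft()
            for n in hub_graph[h]:
                if n not in parent:
                    parent[n] = h
                    frontier.append(n)
        for goal in parent:
            if goal == start:
                continue
            hop = goal                       # walk back to find the first hop
            while parent[hop] != start:
                hop = parent[hop]
            next_hub[(start, goal)] = hop
    return next_hub

def route(a, b, region_of, hub_of, path_to_hub, next_hub, leg):
    # path_to_hub[square] is the precomputed square-by-square route from
    # that square to its region's hub; leg[(h1, h2)] is the precomputed
    # route between two *adjacent* hubs.
    ha, hb = hub_of[region_of[a]], hub_of[region_of[b]]
    path = list(path_to_hub[a])              # A up to its own hub
    h = ha
    while h != hb:                           # string hub-to-hub legs together
        n = next_hub[(h, hb)]
        path += leg[(h, n)]
        h = n
    path += list(reversed(path_to_hub[b]))   # back down from B's hub to B
    return path
```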
- Depth first search
A depth-first search involves trying a single route at a time, keeping a stack of “routes tried so far”. In the case of a maze, this usually means walking in some direction until you reach an intersection, then pushing the untried branches onto a state stack so you can come back to them.
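In code it’s just an explicit stack of partial routes. A minimal sketch (same hypothetical grid-of-strings format as above); note that it finds *a* route, not necessarily the shortest:

```
def depth_first_route(grid, start, dest):
    stack = [[start]]                        # each entry is a route tried so far
    visited = {start}
    while stack:
        path = stack.pop()
        x, y = path[-1]
        if (x, y) == dest:
            return path
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])
                    and grid[ny][nx] != '#' and (nx, ny) not in visited):
                visited.add((nx, ny))
                stack.append(path + [(nx, ny)])
    return None
```

In a real maze you’d only bother pushing at intersections, since corridors give you no choice.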
- Breadth first search
With breadth first, you compute all of the squares which are ‘n’ away from A, where n goes from 1 to infinity, until you find B. You just keep a list of all the squares that are that distance away, and generate the next list by adding all squares adjacent to ones in the current list, which haven’t been considered yet. (You basically build the distance table discussed before.)
This can usually be made to run a bit faster. Imagine the case where your grid is relatively sparse. Suppose A and B are about n grid squares apart. Then the breadth-first search will examine roughly (2n)^2 map squares (a square of “radius” n around A).
To speed it up, simultaneously do a breadth-first search out from both A and B. Wherever they meet is your path; since each search only has to reach out about n/2, you’ll have examined about n^2 map squares around each of A and B, giving you twice the speed.
I’ve never bothered doing either of these; I’ve always done a breadth-first search from B, computing the distance table, since that can be reused.
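For completeness, here’s roughly what the two-ended version looks like (again Python, same made-up grid format). It only returns the meeting square and the total distance; to recover the actual route you’d also keep parent pointers on both sides and splice them together at the meeting square:

```
from collections import deque

def bidirectional_bfs(grid, a, b):
    if a == b:
        return a, 0
    dist = {a: {a: 0}, b: {b: 0}}            # distances seen from each end
    frontiers = {a: deque([a]), b: deque([b])}
    while frontiers[a] and frontiers[b]:
        for side, other in ((a, b), (b, a)):
            for _ in range(len(frontiers[side])):    # expand one full layer
                x, y = frontiers[side].popleft()
                for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if not (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])):
                        continue
                    if grid[ny][nx] == '#' or (nx, ny) in dist[side]:
                        continue
                    dist[side][(nx, ny)] = dist[side][(x, y)] + 1
                    if (nx, ny) in dist[other]:      # the frontiers met
                        d = dist[side][(nx, ny)] + dist[other][(nx, ny)]
                        return (nx, ny), d
                    frontiers[side].append((nx, ny))
    return None                              # no route at all
```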
- A*
Basically, this is a twisted variant of a depth-first search. What you do is have a “heuristic” which tells you the minimum possible distance from a given point to the destination. Now, do a depth-first walk of the map, but where normally you would say, “ok, I can go to either X or Y next, so I’ll push Y on the stack and try X”, instead you do this: “Well, I’ve gone n steps so far. I can go to X or Y. The heuristic says X is x units away from the destination, and Y is y units away. So, at best, a path through X is going to be x+n long, and through Y it’ll be y+n long. So I’ll pick whichever one might be shorter.” The REAL twist is that you also have to consider everything else on the stack in picking the shorter one. If you do, then you’re guaranteed to find the shortest path. Of course, it won’t be a stack then, just a sorted list, at which point it looks a lot more like a breadth-first search, except the weightings are different.
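Here’s a rough sketch in Python, using a priority queue for the “sorted list” and Manhattan distance as the heuristic; for 4-way movement on a grid that never overestimates, so the first route to reach the destination is a shortest one. Same made-up grid format as before:

```
import heapq

def a_star(grid, start, dest):
    def h(p):                                # lower bound on remaining distance
        return abs(p[0] - dest[0]) + abs(p[1] - dest[1])

    queue = [(h(start), 0, start)]           # (steps so far + heuristic, steps, square)
    parent = {start: None}
    best = {start: 0}                        # fewest steps found to each square
    while queue:
        f, g, (x, y) = heapq.heappop(queue)
        if (x, y) == dest:
            path = []                        # walk the parent links back to start
            node = (x, y)
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])
                    and grid[ny][nx] != '#'
                    and g + 1 < best.get((nx, ny), float('inf'))):
                best[(nx, ny)] = g + 1
                parent[(nx, ny)] = (x, y)
                heapq.heappush(queue, (g + 1 + h((nx, ny)), g + 1, (nx, ny)))
    return None
```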
It is interesting to consider what humans do, as well. In some situations, people will actually use “hub” sorts of approaches. For example, when driving, people going from A to B may drive from A to somewhere closer to where they live and then to B, because that’s the only way they know.
I think a relatively accurate approach to how humans probably work would be to do something that mixes them all, like this: Compute routing tables and hubs. Suppose A’s hub is a, and B’s hub is b. Suppose the “obvious” route is this:
A -> a -> c -> d -> e -> b -> B (where c, d, and e are the intermediate hubs)
Then, the route to take is the following:
- Compute a perfect route from A to c.
- Follow routing tables to e.
- Compute a perfect route from e to B.
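Reusing the kinds of pieces sketched above--a two-argument square-to-square router like the A* sketch (with the grid baked in), plus the made-up routing-table dictionaries--it might look something like this; I’m glossing over the duplicated square where the pieces join up:

```
def human_like_route(a, b, perfect_route, region_of, hub_of, next_hub, leg):
    ha, hb = hub_of[region_of[a]], hub_of[region_of[b]]
    hubs = [ha]                              # hub chain, e.g. a, c, d, e, b above
    while hubs[-1] != hb:
        hubs.append(next_hub[(hubs[-1], hb)])
    if len(hubs) < 3:
        return perfect_route(a, b)           # no intermediate hubs at all
    first, last = hubs[1], hubs[-2]          # c and e in the example
    path = perfect_route(a, first)           # perfect route from A to c
    for h1, h2 in zip(hubs[1:-2], hubs[2:-1]):
        path += leg[(h1, h2)]                # follow routing tables from c to e
    path += perfect_route(last, b)           # perfect route from e to B
    return path
```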
This will sometimes find silly routes, so it’s not all that human. But then again, humans make mistakes, and would most likely only find a close-to-perfect route anyway.
Sean Barrett
not a professional game programmer [yet]