Remote Shepherd calls for an AI agent that searches for another AI agent by randomly visiting places in the environment, checking for its quarry, then executing a different behavior once the quarry is found. The first two steps repeat until the quarry is found and that final behavior executes.
At first I attempted to do the whole thing with a behavior tree. That ran into some problems. Here's the behavior tree I tried to use:
The root node is irrelevant to this post. Nodes B and C comprise the search behavior, and B was the problem child. My attempt at it was based on my usual implementation of wander-like behaviors: pick a random goal point, set a flag to indicate that we have a goal, move toward the goal, and when we reach the goal clear the flag so that the next time this node rolls around we pick another goal. Because relevance functions are evaluated before execute functions in behavior tree nodes, and because B's relevance function needs a goal to check against, the goal choosing has to happen in B's relevance function. Normally I use wander-like behaviors as fall-through behaviors, and there this approach works fine. But when there are behaviors that must run after arriving at the goal, well, you might already see why the behavior tree would never reach C. If you don't, consider this sequence of events:
- The agent reaches B for the first time. It picks a goal and sets the flag to true.
- The agent continually hits B on subsequent ticks because it is not yet at its goal, and it moves toward the goal.
- The agent reaches the goal, the flag is set to false so that it can grab a new goal next time around.
- The behavior tree evaluates B's relevance function as it comes around again. Since the flag is false it picks a new goal, and the agent is logically not at that new goal, so the relevance function evaluates to true.
- C is left out of the party.
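To make the starvation concrete, here's a minimal sketch of the situation in Python. All names and the number-line "environment" are my own invention, not the actual project code; the selector just runs the first child whose relevance check passes:

```python
import random

class WanderNode:  # node B: move toward a random goal
    def __init__(self):
        self.position = 0
        self.goal = None
        self.has_goal = False

    def relevance(self):
        if not self.has_goal:
            # The goal choosing has to happen here, before execute().
            self.goal = random.choice(
                [g for g in range(1, 6) if g != self.position])
            self.has_goal = True
        return self.position != self.goal  # always true right after a pick

    def execute(self):
        self.position += 1 if self.goal > self.position else -1
        if self.position == self.goal:
            self.has_goal = False  # cleared, but relevance() re-picks next tick

class CheckNode:  # node C: check for the quarry
    def __init__(self):
        self.ran = False

    def relevance(self):
        return True

    def execute(self):
        self.ran = True

b, c = WanderNode(), CheckNode()
for _ in range(100):  # 100 ticks of the priority selector
    if b.relevance():
        b.execute()
    elif c.relevance():
        c.execute()

print(c.ran)  # False -- B's relevance is true on every tick, so C starves
```

No matter how many ticks you run, `c.ran` stays false: the moment B's flag is cleared, the very next relevance check picks a fresh goal and reports true again.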
The solution was to remove B and C and replace them with a simple finite state machine that looks something like this:
A finite state machine doesn't bother at all with any states other than the current state. That means that when the agent reaches the first goal it clears the flag, just as node B did, but also sets the current state to its yes path: "Check for quarry". The next time the finite state machine runs, it runs the "Check for quarry" state and ignores the "Move toward random goal" state entirely, so it doesn't pick a new goal until that state becomes current again after the check. Once the quarry is found, the machine sets a flag in the parent agent, node A's relevance function evaluates to true, and the behavior tree stops evaluating before it reaches the finite state machine again; we're back to normal behavior tree stuff. Because the tree stops before it gets to the finite state machine, it doesn't matter that the machine dead-ends once the quarry has been found.
In practice I folded "Set quarryFound var to TRUE" into the "Check for quarry | Quarry found?" couplet's yes path. While it may seem odd to place what is essentially an action into the decision section, I thought it would be even odder to make an entirely new state for one line of assignment code.
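Here's a matching sketch of that state machine, again with invented names and a toy one-dimensional world (the quarry check is reduced to a position comparison). Note the quarryFound assignment living in the yes path of the check, and that the machine simply stays in the check state once the quarry is found:

```python
import random

class SearchFSM:
    def __init__(self, agent, quarry_pos):
        self.agent = agent
        self.quarry_pos = quarry_pos
        self.position = 0
        self.goal = None
        self.state = self.move_toward_goal  # only the current state matters

    def move_toward_goal(self):
        if self.goal is None:
            self.goal = random.choice(
                [g for g in range(1, 6) if g != self.position])
        self.position += 1 if self.goal > self.position else -1
        if self.position == self.goal:
            self.goal = None                    # clear for the next search leg
            self.state = self.check_for_quarry  # yes path: go check

    def check_for_quarry(self):
        if self.position == self.quarry_pos:
            # "Set quarryFound var to TRUE" folded into the yes path;
            # the machine dead-ends here, which is fine -- the behavior
            # tree stops before reaching it once the flag is set.
            self.agent["quarryFound"] = True
        else:
            self.state = self.move_toward_goal  # keep searching

    def run(self):
        self.state()  # run only the current state each tick

agent = {"quarryFound": False}
fsm = SearchFSM(agent, quarry_pos=3)
for _ in range(10_000):  # plenty of ticks; each search leg is a handful
    if agent["quarryFound"]:
        break
    fsm.run()

print(agent["quarryFound"])
```

Because "Move toward random goal" only runs while it is the current state, a new goal can't be picked until the check has had its turn, which is exactly the sequencing the behavior tree couldn't express.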