Update: you can now read part two of this series.
Steering behaviours have long been a gateway drug of game AI. People like this (annoyingly pluralised) technique because its components are fun and easy to write, the framework code is very simple (requiring only some vector maths knowledge), and the end result is awesomely emergent.
For the uninitiated, a steering behaviours system is typically used to decide a direction and speed for an entity to move in, although it can be generalised as selecting a direction and strength in a continuous space of any dimensionality, not necessarily just spatial. The system contains many behaviours, each of which when queried returns a vector representing a direction and strength.
These vectors are then combined in some manner. The simplest combination strategy is averaging; there are others, but they don’t change the arguments I make here.
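To make that concrete, here’s a minimal sketch of such a framework in Python, assuming 2D movement and plain (x, y) tuples for vectors. Every name here, from combine_average down, is illustrative rather than taken from any particular engine:

```python
def combine_average(votes):
    """The simplest combination strategy: average every behaviour's vector."""
    if not votes:
        return (0.0, 0.0)
    n = len(votes)
    return (sum(v[0] for v in votes) / n, sum(v[1] for v in votes) / n)

def steer(entity, world, behaviours):
    """Query each behaviour for its direction-and-strength vote, then merge."""
    return combine_average([behaviour(entity, world) for behaviour in behaviours])
```

The framework really is this small; all the interesting work lives in the individual behaviours.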
As an example, consider an entity moving through space, avoiding obstacles and chasing a target. A collision avoidance behaviour may return a vector pointing away from nearby obstacles, and the chasing behaviour will return a vector pointing towards the target. If the obstacle is behind and the target in front, the entity can move towards the target unhindered. If an obstacle lies to the left of the entity’s direction of travel, the entity’s movement vector is nudged slightly to the right, steering it away from danger. Coding behaviour like this by hand would be much more complicated.
The length of each returned vector is proportional to how strongly the behaviour feels about the movement. For instance, when far from a target, the chase behaviour might return a long vector, to get the entity back into the hunt. When very near an obstacle, the collision avoidance behaviour might return a very long vector, to overwhelm other behaviours and get the entity to react quickly.
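As a rough sketch, the two behaviours from the example might look something like this. The vector helpers, the 10-unit chase falloff, and the 5-unit danger radius are all invented for illustration, not canonical values:

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def scale(v, s): return (v[0] * s, v[1] * s)
def norm(v):
    d = math.hypot(v[0], v[1])
    return (v[0] / d, v[1] / d) if d > 0 else (0.0, 0.0)

def chase(entity_pos, target_pos):
    # The further the target, the longer the vector: urgency grows with
    # distance, to pull a straggler back into the hunt.
    offset = sub(target_pos, entity_pos)
    urgency = min(math.hypot(*offset) / 10.0, 1.0)  # arbitrary 10-unit falloff
    return scale(norm(offset), urgency)

def avoid(entity_pos, obstacle_pos, danger_radius=5.0):
    # The nearer the obstacle, the longer the vector: urgency grows as
    # distance shrinks, overwhelming other behaviours at close range.
    offset = sub(entity_pos, obstacle_pos)
    distance = math.hypot(*offset)
    if distance >= danger_radius:
        return (0.0, 0.0)  # far enough away to ignore
    urgency = 1.0 - distance / danger_radius
    return scale(norm(offset), urgency)
```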
This all sounds great, right? Steering behaviour systems can be very effective, as long as you use them in the right situations. They give coherent and pleasing results when there’s a large enough population of entities to hide their flaws. A massive flock moves through obstacles in a convincing manner, but inspect a single member and you’ll find it sometimes behaves erratically, with no robust collision avoidance.
After all, the collision avoidance behaviour has no direct control over entity movement; it can only suggest directions to move in. If the chase behaviour also returns a strong result, the two may fight, and a collision may be unavoidable.
These problems become very visible when you need robust behaviours that work at the micro scale, with a small number of entities. The small component-based behaviours and lightweight framework are attractive, but the system doesn’t scale down. You can code around the edge cases, but the previously-simple behaviours soon become complex and bloated.
Consider an example. If our chasing entity picks a target that’s directly behind an obstacle, there will come a point where the vectors from the chase behaviour and the collision avoidance behaviour cancel each other out. The entity stops dead, even if there’s another nearby, unobstructed target it could pick. The chase behaviour doesn’t know about the obstruction, so it will never pick the second target.
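A contrived snapshot shows the deadlock. Suppose the target sits dead ahead and the obstacle sits directly in between, so the two behaviours vote at full strength in exactly opposite directions (the numbers here are invented for illustration):

```python
chase_vote = (1.0, 0.0)   # full-strength pull towards the target
avoid_vote = (-1.0, 0.0)  # full-strength push away from the obstacle

# Averaging the two votes produces a dead stop.
average = ((chase_vote[0] + avoid_vote[0]) / 2,
           (chase_vote[1] + avoid_vote[1]) / 2)
print(average)  # (0.0, 0.0) -- the entity stops dead
```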
To fix this, the first thing most people try is to have the chase behaviour path-find or ray-cast to its target. If the target is unreachable or obscured, the behaviour picks another one. This works, and your system is more robust.
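A sketch of that patch might look like the following, where line_of_sight_clear is a hypothetical stand-in for whatever ray-cast or path-find query your engine provides:

```python
def pick_target(entity_pos, targets, obstacles, line_of_sight_clear):
    # Reject any target the entity can't see -- note the cost: the chase
    # behaviour now has to know that obstacles exist at all.
    reachable = [t for t in targets
                 if line_of_sight_clear(entity_pos, t, obstacles)]
    if not reachable:
        return None  # nothing visible to chase this frame
    # Chase the nearest target we can actually reach.
    return min(reachable,
               key=lambda t: (t[0] - entity_pos[0]) ** 2
                           + (t[1] - entity_pos[1]) ** 2)
```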
However, not only has your chase behaviour become an order of magnitude more expensive, it has also become aware that such things as obstacles exist. The whole point of a steering behaviours implementation is to separate concerns, reducing code complexity and making the system easier to maintain. We’ve had to break that separation, and we’ve lost those benefits as a result.
This is the design flaw of steering behaviours. Each behaviour produces a decision, and all the decisions are merged. If one behaviour’s decision (to chase a particular target) conflicts with another’s (to avoid a particular obstacle), the most intelligent merge algorithm in the world will still fail. It has no way to know that two results conflict, and even if it did, it has no way to know how to resolve the conflict.
To do that, the system needs not decisions but contexts. It needs to understand how each behaviour sees the world, and only then can it produce its own correct decision.
In a context-based chasing entity, the targeting behaviour would return a view of the world showing the several potential targets and how strongly it wants to chase each one. The obstacle avoidance behaviour would return a view showing the obstacles and how strongly it wants to avoid each one. When placed in the balanced target-behind-obstacle situation above, the obstacle and the target still cancel each other out, but all the other contexts remain, including the other potential targets. The system can recover and choose a direction that’s sensible and coherent.
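As a very rough way to picture it, and only a guess at a shape rather than the actual scheme from the next post, each behaviour might fill in a map of interest or danger over a shared set of candidate directions, and the merge could then score every direction instead of collapsing each behaviour to a single vector:

```python
NUM_SLOTS = 8  # e.g. eight compass directions around the entity

def merge_contexts(interest_maps, danger_maps):
    # Keep the strongest opinion per direction slot from each side
    # (assumes every behaviour supplies a full map).
    interest = [max(m[i] for m in interest_maps) for i in range(NUM_SLOTS)]
    danger = [max(m[i] for m in danger_maps) for i in range(NUM_SLOTS)]
    # A blocked direction loses its appeal, but every other slot
    # survives, so a second, unobstructed target can still win.
    scores = [interest[i] - danger[i] for i in range(NUM_SLOTS)]
    return max(range(NUM_SLOTS), key=lambda i: scores[i])
```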
If computing and merging world contexts sounds to you like a recipe for generalised compromises and messy data structures, you’d be wrong. And I’ll tell you why in my next blog post.
What? Don’t look at me like that.
Update: continue reading the second post in this series now.