Update: you can now read part two of this series.
Steering behaviours have long been a gateway drug of game AI. People like this (annoyingly pluralised) technique because its components are fun and easy to write, the framework code is simple and requires only some vector maths knowledge, and the end result is awesomely emergent.
For the uninitiated, a steering behaviours system is typically used to decide a direction and speed for an entity to move in, although it can be generalised as selecting a direction and strength in a continuous space of any dimensionality, not necessarily just spatial. The system contains many behaviours, each of which when queried returns a vector representing a direction and strength.
These vectors are then combined in some manner. The simplest combination strategy is averaging, but there are others, and they don't really change the arguments I make here.
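As a concrete sketch of that framework, assuming nothing more than a toy 2D vector type (none of these names come from any particular engine), the query-and-average loop might look like this:

```python
import math

class Vec2:
    """Tiny 2D vector with just the operations these sketches need."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, o): return Vec2(self.x + o.x, self.y + o.y)
    def __sub__(self, o): return Vec2(self.x - o.x, self.y - o.y)
    def scaled(self, s):  return Vec2(self.x * s, self.y * s)
    def length(self):     return math.hypot(self.x, self.y)

def steer(entity, behaviours):
    """Query every behaviour for a desired movement vector and average them."""
    total = Vec2(0.0, 0.0)
    for behaviour in behaviours:
        total = total + behaviour.query(entity)
    return total.scaled(1.0 / len(behaviours))
```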
As an example, consider an entity moving through space avoiding obstacles and chasing a target. A collision avoidance behaviour may return a vector pointing away from nearby obstacles, and the chasing behaviour will return a vector pointing towards the target. If the obstacle is behind and the target in front, the entity can move towards the target unhindered. If an obstacle is to the left of the entity’s direction of travel, it will nudge its movement vector slightly to the right, moving it away from danger. Coding behaviour like this by hand would be much more complicated.
The strength of the returned vector is proportional to how strongly the behaviour feels about the movement. For instance, when far from a target, the chase behaviour might return a long vector, to get the entity back into the hunt. When very near an obstacle, the collision avoidance behaviour might return a very long vector, to overwhelm other behaviours and get the entity to react quickly.
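Continuing the sketch above, the two behaviours might be written something like this. The `CHASE_WEIGHT` constant and the panic-radius falloff are illustrative assumptions, not canonical values:

```python
CHASE_WEIGHT = 0.5  # arbitrary tuning constant, assumed for illustration

class ChaseBehaviour:
    def __init__(self, target):
        self.target = target

    def query(self, entity):
        # Point at the target; the further away it is, the longer the
        # vector, to get a distant entity back into the hunt.
        return (self.target.position - entity.position).scaled(CHASE_WEIGHT)

class AvoidBehaviour:
    """The post's simplified flee-style avoidance (the comment thread below
    discusses why real obstacle avoidance is usually perpendicular instead)."""
    def __init__(self, obstacle, panic_radius):
        self.obstacle = obstacle
        self.panic_radius = panic_radius

    def query(self, entity):
        away = entity.position - self.obstacle.position
        distance = away.length()
        if distance >= self.panic_radius or distance == 0.0:
            return Vec2(0.0, 0.0)
        # Magnitude blows up as distance shrinks, so a very close obstacle
        # overwhelms every other behaviour.
        return away.scaled(self.panic_radius / (distance * distance))
```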
This all sounds great, right? Steering behaviour systems can be very effective, as long as you use them in the right situations. They give coherent and pleasing results when they have the statistical advantage of numbers to hide their flaws. A massive flock of entities moves through obstacles in a convincing manner, but inspect one of those entities and you'll find it sometimes behaves erratically, and without robust collision avoidance.
After all, the collision avoidance behaviour has no direct control over entity movement; it can only suggest directions to move in. If the chase behaviour also returns a strong result, the two may fight, and collision may be unavoidable.
When creating robust behaviours that must hold up at the individual scale, with a small number of entities, these problems become very visible. The small component-based behaviours and lightweight framework are attractive, but the system doesn't scale down. You can code around the edge cases, but the previously-simple behaviours soon become complex and bloated.
Consider an example. If our chasing entity picks a target that's directly behind an obstacle, there will come a point where the vectors from the chase behaviour and the collision avoidance behaviour cancel each other out. The entity will stop dead, even if there's another nearby, unobstructed target it could pick. The chase behaviour doesn't know about the obstruction, so it will never pick the second target.
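A toy worked example makes the stall concrete. With the averaging scheme from the sketch above, equal and opposite desires sum to nothing:

```python
# Entity at the origin, obstacle directly ahead, target just beyond it.
chase = Vec2(2.0, 0.0)   # pull towards the target, straight through the obstacle
avoid = Vec2(-2.0, 0.0)  # equal-strength push away from the obstacle

combined = (chase + avoid).scaled(0.5)
print(combined.x, combined.y)  # 0.0 0.0 -- the entity stops dead
```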
To fix this, the first thing most people will try is to have the hunting behaviour path-find or ray-cast to the target. If it’s unreachable or obscured, the behaviour can pick another target. This is successful, and your system is more robust.
However, not only has your hunting behaviour become an order of magnitude more expensive, it has also become aware that such things as obstacles exist. The whole point of a steering behaviours implementation is to separate concerns, reducing code complexity and making the system easier to maintain. We had to break that separation, and we've lost those benefits as a result.
This is the design flaw of steering behaviours. Each behaviour produces a decision, and all decisions are merged. If one behaviour's decision (to chase a particular target) conflicts with another's (to avoid a certain obstacle), the most intelligent merge algorithm in the world will still fail. There's no way for it to know that two results conflict, and even if there were, there's no way for it to know how to resolve the conflict successfully.
To do that, the system needs not decisions but contexts. It needs to understand how each behaviour sees the world; only then can it produce its own correct decision.
In a context-based chasing entity, the target behaviour would return a view of the world showing the several potential targets and how strongly the behaviour wants to chase each one. The obstacle avoidance behaviour would return a view showing the obstacles and how strongly the behaviour wants to avoid each one. When placed in the balanced target-behind-obstacle situation above, the obstacle and the blocked target cancel each other out, but all the other contexts remain, including the other potential targets. The system can recover and choose a direction that's sensible and coherent.
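As a rough illustration only (the real scheme is the subject of the next post), such context maps might be sets of slots over candidate headings, with the merge keeping every slot rather than collapsing everything down to one vector:

```python
import math

SLOT_COUNT = 16  # assumed resolution: one slot per candidate heading

def slot_direction(i):
    angle = 2.0 * math.pi * i / SLOT_COUNT
    return Vec2(math.cos(angle), math.sin(angle))

def choose_heading(interest_maps, danger_maps):
    """Merge per-behaviour context maps and pick a safe, attractive heading.

    Each map is a SLOT_COUNT-length list of floats: how strongly one
    behaviour wants (interest) or fears (danger) each slot's direction.
    """
    interest = [max(m[i] for m in interest_maps) for i in range(SLOT_COUNT)]
    danger = [max(m[i] for m in danger_maps) for i in range(SLOT_COUNT)]

    # Discard headings that are more dangerous than they are interesting.
    # The surviving slots still hold the other targets, so the entity can
    # recover instead of stalling in equilibrium.
    best, best_score = None, -1.0
    for i in range(SLOT_COUNT):
        if danger[i] >= interest[i]:
            continue
        if interest[i] > best_score:
            best, best_score = i, interest[i]
    return slot_direction(best) if best is not None else Vec2(0.0, 0.0)
```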
If computing and merging world contexts sounds like a recipe for generalised compromises and messy data structures, you'd be wrong. And I'll tell you why in my next blog post.
What? Don’t look at me like that.
Update: continue reading the second post in this series now.
I like the idea, but the solution you propose still leaves me a little confused. Especially when we scale up the number of entities, wouldn't that come at a very high cost?
You’re absolutely right, it is expensive. But with a large number of objects you hopefully have the statistical scale to get away with the inconsistencies of traditional steering behaviours, so my complaints go away.
Also: thanks for taking the time to read and comment! You’re my first ever commenter!
But accepting less exact steering behaviors in exchange for precious resources sounds like a good deal to me. Why risk it? How much of a difference would it make for the player?
(Don’t mention it! AI is like my favorite thing and this showed up in my twitter feed! 😀 )
It all depends on your application, of course. I used my proposed fix in a racing game where speeds were so high that collisions would have been catastrophic for the cars involved. Another racing game I know of used a simple finite state machine, but for their genre that was absolutely acceptable.
But my proposed fix is about more than just collision avoidance – the behaviours are very simple and easy to balance, yet the emergent properties are more complex than standard steering behaviours. I’ll explain more in my next post.
If you’re on reddit please consider submitting the next post to http://www.reddit.com/r/gameai – I’m interested in reading about your approach and I expect others will be as well.
Thanks – I will do.
Thanks Andy. Having never had to write any AI code for the games I have worked on, I was only really aware of steering behaviours as a concept. The article does a very good job of explaining the principles of the technique. This might be showing off, but I must say I realised fairly quickly where the approach would run into problems, and that certain restrictions would need to be lifted for it to actually work in practice. I'm certainly looking forward to your next blog post to see how you intend to address the problems.
I’m anxiously awaiting your followup article.
I view steering behaviors as a low-level command, perhaps as low level as supplying the force vector for the physics integrator.
Some more intelligent code selects the currently active steering behavior for the agent. Something like a behavior tree or (I presume) these context maps. This code need not update at the same rate as the physics or the graphics. Heck, _I_ certainly don’t update that fast.
Apologies for the delay in the follow-up article: a new job means I haven’t even started it yet. 😦
You could certainly move the target selector into a higher system, but that only moves the problem. The higher system still needs to choose targets that are reachable, so obstacle avoidance again bleeds into the target selector, even if you run it at a lower framerate. If you have multiple behaviours, your higher-level system quickly becomes a megaclass.
Well, the obstacle avoidance vector you are using in the last example is not exactly right… That one is the ‘Flee’ vector (run away from the obstacle). An avoidance vector in that case could be perpendicular to the vector towards the obstacle (choosing the perpendicular vector that leads closer to the target). That way the obstacle would be avoided.
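For readers following along, a hedged sketch of the perpendicular avoidance this commenter describes, reusing the toy `Vec2` type from the post's sketches, might be:

```python
def perpendicular_avoidance(entity_pos, obstacle_pos, target_pos):
    to_obstacle = obstacle_pos - entity_pos
    # The two candidate sidesteps, perpendicular to the obstacle direction.
    left = Vec2(-to_obstacle.y, to_obstacle.x)
    right = Vec2(to_obstacle.y, -to_obstacle.x)
    # Choose whichever perpendicular leads closer to the target.
    to_target = target_pos - entity_pos
    def dot(a, b):
        return a.x * b.x + a.y * b.y
    return left if dot(left, to_target) >= dot(right, to_target) else right
```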
In any case, you have your point: they are not a universal solution and they need context (or at least a nice finite state machine that changes goals)
Andrew, I just saw your posts today, via a student in a Game AI class where I recently gave a talk on steering behaviors.
While I liked the second post in this series, this one felt like a strawman argument. Steering behaviors have been in wide use for about 27 years, which suggests that the problems you describe are not fundamental, and may in fact be related to your own implementation. I am glad you were led to find a useful enhancement, as described in part two, but that does not support your argument that simple steering behaviors are weak or ineffective.
For example, as PlayMedusa points out above, your third figure is simply wrong. The obstacle avoidance steering behavior returns a vector that is perpendicular to the heading of the agent. It is described that way in the 1999 GDC paper, and the obstacle avoidance behaviors used in Breaking the Ice in 1987 worked that way. (As does the reference implementation in OpenSteer.) So you certainly would not get a zero length result. The path would look a lot like the right hand example in your first figure. Depending on your cost and reward metrics, choosing the much closer target is probably the preferred behavior.
You say “the collision avoidance behaviour has no direct control over entity movement” but in fact it could have if you had not begun by brushing off steering combination techniques beyond simple averaging. A very common approach is to use steering behaviors in the context of a simple decision tree. Start by calculating obstacle avoidance. If the result is nonzero, then return that as the combined steering force. If it was zero, meaning no potential collisions within your prediction horizon, then you move on to compute the non-obstacle-avoidance portion of your steering force (goal seeking for example). Between this hard-edged conditional steering and simple averaging are a wide range of more subtle and agile ways to combine simple steering behaviors.
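A sketch of that hard-edged conditional combination, using the toy behaviour interface from the post's sketches:

```python
def combined_steering(entity, avoid_behaviour, seek_behaviour):
    # Obstacle avoidance is computed first and, if nonzero, wins outright.
    avoidance = avoid_behaviour.query(entity)
    if avoidance.length() > 0.0:
        return avoidance
    # No collision predicted within the horizon: goal seeking gets its turn.
    return seek_behaviour.query(entity)
```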
About that “annoyingly pluralised” name, as you said “The system contains many behaviours…” The 1999 paper used the plural in the title for exactly this reason, that the technique involves a toolkit of simple components meant to be combined together to create more interesting and complex results.
Best regards,
Craig Reynolds
It’s an honour to have you comment, Craig. Thank you for taking the time.
I’ve addressed both your comments together.
First of all, understand that despite my hyperbole, I think steering behaviours are a wonderfully elegant and practical solution to many problems, and I am thankful you created them. I never meant to suggest they're weak or ineffective in anything but very specific circumstances.
The distinction I stumbled upon in the second post was that, when the entities are designed not to be observed as a flock, but as individuals, inconsistencies in steering can become visible. I’ll return to this point later.
You’re correct that the obstacle avoidance steering behaviour is presented incorrectly, but I think it’s a simplification, rather than a straw man. The point here isn’t exactly how obstacle avoidance works, just that it’s possible for two behaviours to reach equilibrium. I could have presented obstacle avoidance correctly, but I decided to make the blog post as accessible as I could to people unfamiliar with steering behaviours, and I didn’t think it changed the argument substantially.
You’re also right that a priority or decision tree control system would have avoided an equilibrium situation, but this would trade one inconsistency for another. The desirable result would be for the entity to move towards the target that was not blocked by an obstacle. With a decision tree, the entity moves away from the obstacle but no longer makes *any* decisions about moving towards targets. In the last image of the post these results happen to be the same, but it's trivial to construct a situation where they're not, and then the entity acts inconsistently.
(It could be argued that this is a balancing issue, but that can be a non-trivial matter. In one racing project I worked on, the cars were so fast, so closely packed, and so fragile that we could not afford to let any behaviour except collision avoidance run. I spent a lot of time finding the parameters describing the situations where it was OK not to run avoidance, so the more adventurous behaviours could run. My context steering solution fixes this problem completely by allowing all behaviours to run all of the time.)
Returning to the blog post's example, if we could merge the furthest-target vector and the obstacle vector somehow, we'd get a good solution. But the only way to do that is to disable the decision tree again and add collision knowledge inside the target selection behaviour. At best this is redundant coupling, at worst it's some very hacky code. The obstacle avoidance behaviour should be the one to care about obstacles, not the target selection behaviour.
That is why I brushed off combination techniques. They don’t change my fundamental argument – that steering behaviours return solutions, without the context required to merge them with integrity. I actually think part of the beauty of steering behaviours stems from the fact that their elegance and weakness are two sides of the same coin.
You make a good point about performance comparisons. In the project I used context steering in, it ended up being substantially more expensive than the system it replaced, for exactly the reason you give. However, this might not be a universal truth: calculating many vectors from each behaviour is more expensive than calculating just one, but because of the way the behaviours combine you could drastically reduce the number of necessary path-finds or raycasts. YMMV. Either way, I shouldn't have claimed performance advantages for context steering.
And for exactly the reason you state, I have yet to think of a practical way to make context steering work in three dimensions. I've made that clear in all the talks I've given on this, but I should have done the same in the blog posts.
Andrew, thanks for your reply. Sometimes for illustrative purposes we use a simple example, but think of it as a proxy for a more complicated one. You said “The desirable result would be for the entity to move towards the target that was not blocked by an obstacle.” To me, Figure 3 suggests a situation like this: I’m standing just to the east of a tree. I know there is a pot of gold just to the west of the tree and another pot of gold twice as far to the east. In that scenario I would walk around the tree to the nearer pot of gold. If you and I can’t agree on the correct behavior for a simple case like this, what chance is there that a couple of lines of vector arithmetic will make the “correct” choice? 🙂
I think “target selection” is a separate problem and not part of generic steering behaviors. The basic Seek steering behavior is applicable only after a single goal has been identified. I would put target selection under the control of something like that decision tree for a given application-specific agent in a given scenario. It might be asking questions like: are there any interesting goals near enough to target?, if more than one, which is best?, etc. For fast-moving agents, the key question is often: how close is a target to my predicted future path?
Simplifying examples for pedagogical reasons is fine, but those same readers unfamiliar with steering behaviors might then confuse them with phenomena like electrostatic attraction and repulsion of inert bodies. A concise description of how steering behaviors differ is that they compute a corrective feedback to the “steering error” — the difference between an agent’s current velocity and its desired velocity. So electrostatic forces are radial “central forces” while steering behaviors usually are not.
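As a sketch of that corrective feedback, again with the post's toy vector type:

```python
def steering_force(current_velocity, desired_velocity, max_force):
    # Corrective feedback on the "steering error": the difference between
    # the agent's current velocity and its desired velocity, clipped to the
    # agent's maximum force. Note it is not radial towards or away from a
    # point, unlike an electrostatic force.
    error = desired_velocity - current_velocity
    length = error.length()
    if length > max_force:
        error = error.scaled(max_force / length)
    return error
```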
Cheers,
Craig
That’s why the steering behaviors are multiplied by weights.