DAMN Behaviours and Context Steering

After my GDC talk, Treff on twitter sent me a link to a paper from the late 90s by a researcher called Julio K. Rosenblatt, describing his DAMN architecture. It shares some ideas with my context steering technique, so I thought I'd discuss the similarities and differences here.

The system asks modules (behaviours) to vote on how much they prefer each decision in a set of possible decisions. Each vote is weighted according to which behaviour it came from. Votes range from -1 (against) to 1 (for). Superficially this is similar to context steering, but it does not split the votes across an interest map and a danger map. Because of this, it suffers from the same lack of movement constraint that we see with steering behaviours. The paper gets around this by weighting avoidance behaviours much more highly, but that just ends up disabling some nice emergent behaviours, as we saw with the balanced vector problem:

Competing behaviours can cancel each other out, leading to stalemate
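
To make the voting concrete, here's a minimal sketch of a DAMN-style arbiter. The candidate headings, the two behaviours and the weights are all mine, invented for illustration; none of them come from the paper.

```python
# Illustrative DAMN-style arbitration: behaviours vote in [-1, 1] for every
# candidate decision; the arbiter sums the votes, weighted per behaviour,
# and picks the candidate with the highest total.

CANDIDATE_HEADINGS = [0, 45, 90, 135, 180, 225, 270, 315]  # degrees

def chase_votes(headings):
    # Strongly favour 45 degrees, mildly favour its neighbours, mildly oppose the rest.
    return {h: 1.0 if h == 45 else (0.3 if h in (0, 90) else -0.2) for h in headings}

def avoid_votes(headings):
    # Vote hard against 45 degrees: an obstacle lies that way.
    return {h: -1.0 if h == 45 else 0.0 for h in headings}

# Avoidance is weighted far more highly than chasing, as the paper suggests,
# so it can always veto a dangerous heading.
BEHAVIOURS = [(chase_votes, 1.0), (avoid_votes, 5.0)]

def arbitrate(headings):
    totals = {h: 0.0 for h in headings}
    for behaviour, weight in BEHAVIOURS:
        for heading, vote in behaviour(headings).items():
            totals[heading] += weight * vote
    return max(totals, key=totals.get)

print(arbitrate(CANDIDATE_HEADINGS))  # 0: a neighbour of the blocked heading wins
```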

The merging of votes doesn't happen in the decision space. From the diagram below, it seems there is some metadata about the curves used to write the votes. Notice how a single central curve is created from the two behaviours, rather than one small peak and one large peak. This is essentially a rasterised version of steering behaviours combined through weighted averages.

[Diagram from the Rosenblatt paper: vote curves from two behaviours merged into a single central curve]

I think this all adds up to a rather expensive way of implementing steering behaviours. That's somewhat understandable, as the paper came out just as, or just before, steering behaviours were starting to become popular, so the author may have been deep into his research by the time he heard of them.

There are several interesting aspects to the paper. It mentions that the behaviours all update at different frequencies, and that the arbiter may receive votes at any time. This is great for behaviours that are either low-priority or don't change much, and it allows easy parallelisation.
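
A rough sketch of how that decoupling might look, with each behaviour caching its votes between updates (the class, periods and toy votes are my own, not the paper's):

```python
class ScheduledBehaviour:
    """Recomputes its votes only when its period has elapsed; the arbiter
    simply reads whatever the most recent votes happen to be."""

    def __init__(self, compute_votes, period):
        self.compute_votes = compute_votes
        self.period = period
        self.next_update = 0.0
        self.cached_votes = {}

    def tick(self, now):
        if now >= self.next_update:
            self.cached_votes = self.compute_votes()
            self.next_update = now + self.period
        return self.cached_votes

# A reactive avoider runs every tick; a slower planner only every 0.25s of
# simulated time. In between, its old votes stand, costing almost nothing.
avoider = ScheduledBehaviour(lambda: {"left": -1.0}, period=0.0)
planner = ScheduledBehaviour(lambda: {"right": 0.5}, period=0.25)
for now in (0.0, 0.1, 0.2):
    print(now, avoider.tick(now), planner.tick(now))
```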

DAMN uses multiple subsystems, each asking the behaviours different questions: a speed arbiter works out how fast to go, a turn arbiter decides on direction, and, because this was originally for controlling robots, a "field of regard" arbiter works out where to point the cameras. In comparison, context behaviours tend to use the maps primarily for computing a heading, with speed calculated as a secondary factor – normally from the highest magnitude of interest or danger encountered. Splitting things up like this makes for better separation of concerns, at a possible redundancy cost depending on implementation. It's an idea worth exploring.
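
For comparison, here's a minimal sketch of a context-style decision that derives speed as that secondary factor. The slot count, the masking rule and the exact speed formula are all assumptions of mine, not a definitive implementation:

```python
import math

SLOT_COUNT = 8  # candidate directions spaced evenly around the entity

def choose_heading_and_speed(interest, danger, max_speed):
    """interest and danger are lists of SLOT_COUNT values in [0, 1]."""
    # Ignore any slot more dangerous than the safest slot available,
    # then head towards the most interesting slot that remains.
    min_danger = min(danger)
    usable = [i if d <= min_danger else 0.0 for i, d in zip(interest, danger)]
    best = max(range(SLOT_COUNT), key=lambda s: usable[s])
    heading = best * (2.0 * math.pi / SLOT_COUNT)
    # Speed as a secondary factor, taken from the strongest interest and
    # danger encountered: keen and safe means fast, nearby danger means slow.
    speed = max_speed * max(interest) * (1.0 - max(danger))
    return heading, speed

print(choose_heading_and_speed(
    interest=[0.0, 0.8, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
    danger=[0.0, 0.0, 0.0, 0.0, 0.6, 0.0, 0.0, 0.0],
    max_speed=5.0))
```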

The paper talks about structuring behaviours using a subsumption-style approach, with high-frequency basic behaviours providing a “first level of competence”, built upon with more complex, possibly lower-frequency behaviours later. I like this way of thinking about behaviours. You can build your higher-level behaviours to be allowed to fail, knowing you’ll be caught by the lower-level systems.
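
A toy sketch of that layering, with made-up behaviours and votes: the cheap low-level layer always contributes, while the higher-level layer is free to return nothing when it fails.

```python
def low_level_avoid(state):
    # First level of competence: cheap, always present, runs every frame.
    return {"left": -1.0, "right": 0.2} if state.get("wall_on_left") else {}

def high_level_plan(state):
    # More complex and allowed to fail; an empty result is fine because the
    # layer below still keeps the entity out of trouble.
    return {"right": 1.0} if state.get("path_found") else {}

LAYERS = [low_level_avoid, high_level_plan]

def layered_votes(state):
    votes = {}
    for layer in LAYERS:
        for decision, value in layer(state).items():
            votes[decision] = votes.get(decision, 0.0) + value
    return votes

# The planner has failed here, but the low-level layer still steers us right.
print(layered_votes({"wall_on_left": True, "path_found": False}))
```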

There’s also some dense but potentially interesting passages that discuss methods of trying to evaluate the utility of each decision. It looks interesting but is a bit over my head. If anyone’s got any further information on what they were talking about, please share it in the comments.

In summary I don’t think there’s a lot of similarity between context behaviours and DAMN behaviours, beyond the superficial. Context behaviours could take heed of DAMN’s separation of concerns and the way polling is reversed, possibly making for better structuring of code. DAMN could do with adopting some of the simplicity of steering behaviours, or if required, the constraints and predictability of context behaviours.


Quickie: come see me speak at GDC

The problem I explained in my last blog post is essentially the “why” of a talk I’m giving at GDC at the end of the month. There I’ll be explaining my solution as well as showing some demos. The follow-up blog post will appear after the talk, but that’s like skipping the cinema release for the DVD; it’s just not the same! 

If you’d like to come along, it’s in the AI Summit, 3pm on the 25th, room 2004 of the West Hall.

Steering behaviours are doing it wrong

Update: you can now read part two of this series.

Steering behaviours have long been a gateway drug of game AI. People like this (annoyingly pluralised) technique because its components are fun and easy to write, the framework code is very simple, requiring only some vector maths knowledge, and the end result is awesomely emergent.

For the uninitiated, a steering behaviours system is typically used to decide a direction and speed for an entity to move in, although it can be generalised as selecting a direction and strength in a continuous space of any dimensionality, not necessarily just spatial. The system contains many behaviours, each of which when queried returns a vector representing a direction and strength.

These vectors are then combined in some manner. The simplest combination strategy is averaging, but there are others, and they don't really change the arguments I make here.
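
The framework code really is small. Here's a minimal sketch of the averaging version, with toy behaviours standing in for real ones:

```python
def combine(behaviours, entity, world):
    # Each behaviour returns a 2D (x, y) vector; average them all.
    total_x = total_y = 0.0
    for behaviour in behaviours:
        vx, vy = behaviour(entity, world)
        total_x += vx
        total_y += vy
    count = len(behaviours)
    return (total_x / count, total_y / count) if count else (0.0, 0.0)

# Two toy behaviours: one pulls right, one pulls gently up.
behaviours = [lambda e, w: (1.0, 0.0), lambda e, w: (0.0, 0.5)]
print(combine(behaviours, entity=None, world=None))  # (0.5, 0.25)
```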

As an example, consider an entity moving through space avoiding obstacles and chasing a target. A collision avoidance behaviour may return a vector pointing away from nearby obstacles, and the chasing behaviour will return a vector pointing towards the target. If the obstacle is behind and the target in front, the entity can move towards the target unhindered. If an obstacle is to the left of the entity’s direction of travel, it will nudge its movement vector slightly to the right, moving it away from danger. Coding behaviour like this by hand would be much more complicated.

Visual depiction of two steering behaviour scenarios described above

The strength of each returned vector is proportional to how strongly the behaviour feels about the movement. For instance, when far from a target, the chase behaviour might return a long vector, to get the entity back into the hunt. When very near an obstacle, the collision avoidance behaviour might return a very long vector, to overwhelm other behaviours and get the entity to react quickly.

Behaviour results can be proportional to distance to target
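
Here's a minimal sketch of those two behaviours, with result strength scaled by distance as described. The scaling constants and the danger radius are arbitrary values I've picked for illustration:

```python
import math

def towards(a, b):
    return (b[0] - a[0], b[1] - a[1])

def with_length(v, length):
    current = math.hypot(v[0], v[1])
    if current == 0.0:
        return (0.0, 0.0)
    return (v[0] / current * length, v[1] / current * length)

def chase(entity_pos, target_pos):
    # Points at the target; the further away it is, the stronger the pull.
    to_target = towards(entity_pos, target_pos)
    return with_length(to_target, min(math.hypot(*to_target) * 0.1, 1.0))

def avoid(entity_pos, obstacle_pos, danger_radius=5.0):
    # Points away from the obstacle; the closer it is, the stronger the push,
    # ramping up to a value big enough to overwhelm other behaviours.
    away = towards(obstacle_pos, entity_pos)
    distance = math.hypot(*away)
    if distance >= danger_radius:
        return (0.0, 0.0)
    return with_length(away, (danger_radius - distance) / danger_radius * 2.0)

# Target ahead, obstacle well behind: the entity moves towards the target unhindered.
print(chase((0, 0), (10, 0)), avoid((0, 0), (-6, 0)))  # (1.0, 0.0) (0.0, 0.0)
```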

This all sounds great, right? Steering behaviour systems can be very effective, as long as you use them in the right situations. They give coherent and pleasing results when they have the statistical weight of numbers to hide their flaws. A massive flock of entities moves through obstacles in a convincing manner, but inspect one of those entities and you'll find it sometimes behaves erratically, and without robust collision avoidance.

After all, the collision avoidance behaviour has no direct control over the entity's movement; it can only suggest directions to move in. If the chase behaviour also produces a strong result, the two may fight and a collision may be unavoidable.

When creating robust behaviours that have to hold up at the individual scale, with a small number of entities, these problems become very visible. The small component-based behaviours and lightweight framework are attractive, but the system doesn't scale down. You can code around the edge cases, but the previously simple behaviours soon become complex and bloated.

Consider an example. If our chasing entity picks a target that's directly behind an obstacle, there will come a point where the vectors from the chase behaviour and the collision avoidance behaviour cancel each other out. The entity will stop dead, even if there's another nearby, unobstructed target it could have picked. The chase behaviour doesn't know about the obstruction, so it will never pick the second target.

Competing behaviours can cancel each other out, leading to stalemate
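
With some made-up numbers, the stalemate is easy to see:

```python
# Target directly behind an obstacle (positions and strengths are invented):
# the pull towards the target and the push away from the obstacle end up
# equal and opposite, so the averaged result is zero and the entity stops dead.
chase_vector = (1.0, 0.0)    # towards the target, which lies beyond the obstacle
avoid_vector = (-1.0, 0.0)   # away from the obstacle sitting directly in the way
combined = ((chase_vector[0] + avoid_vector[0]) / 2.0,
            (chase_vector[1] + avoid_vector[1]) / 2.0)
print(combined)  # (0.0, 0.0)
```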

To fix this, the first thing most people will try is to have the chase behaviour path-find or ray-cast to the target. If the target is unreachable or obscured, the behaviour can pick another one. This works, and your system becomes more robust.

However, not only has your chase behaviour become an order of magnitude more expensive, it's also become aware that such things as obstacles exist. The whole point of a steering behaviours system is to separate concerns, reducing code complexity and making the system easier to maintain. We've had to break that separation, and we've lost those benefits as a result.

This is the design flaw of steering behaviours. Each behaviour produces a decision, and all the decisions are merged. If one behaviour's decision (to chase a particular target) conflicts with another's (to avoid a certain obstacle), the most intelligent merge algorithm in the world will still fail. There's no way for it to know that two results conflict, and even if there were, there's no way for it to know how to resolve the conflict successfully.

To do that, the system needs not decisions but contexts. It needs to understand how each behaviour sees the world; only then can it produce its own correct decision.

In a context-based chasing entity, the target behaviour would return a view of the world showing that there are several potential targets and how strongly the behaviour wants to chase each one. The obstacle avoidance behaviour would return a view showing several obstacles and how strongly the behaviour wants to avoid each one. When placed in the balanced target-behind-obstacle situation above, the obstructed target and the obstacle still cancel each other out, but all the other context remains, including the other potential targets. The system can recover and choose a direction that's sensible and coherent.
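
As a rough sketch of how those merged views might be used, assuming directions are discretised into slots (the slot layout and the masking rule are my own assumptions here; the real technique is the subject of the next post):

```python
SLOTS = 8  # evenly spaced candidate directions around the entity

def merge_and_decide(interest, danger):
    # Discard any slot more dangerous than the safest slot available...
    min_danger = min(danger)
    usable = [i if d <= min_danger else 0.0 for i, d in zip(interest, danger)]
    # ...then head towards the most interesting slot that remains.
    return max(range(SLOTS), key=lambda s: usable[s])

# Slot 2 holds the juiciest target, but it sits behind an obstacle;
# slot 5 holds a lesser, unobstructed target.
interest = [0.0, 0.1, 0.9, 0.1, 0.0, 0.6, 0.1, 0.0]
danger   = [0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0]
print(merge_and_decide(interest, danger))  # 5: the system recovers and picks the other target
```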

If computing and merging world contexts sounds to you like generalised compromises and messy data structures, you'd be wrong. And I'll tell you why in my next blog post.

What? Don’t look at me like that.

Update: continue reading the second post in this series now.