After my GDC talk, Treff on twitter sent me a link to a paper from the late 90s by a researcher called Julio K. Rosenblatt, describing a system called DAMN (Distributed Architecture for Mobile Navigation). It had some ideas similar to my context steering technique, so I thought I'd discuss the differences and similarities here.
The system asks modules (behaviours) to vote on how much they prefer each decision in a set of possible decisions. Each vote is weighted according to the behaviour it came from, and votes range from -1 (against) to 1 (for). Superficially this is similar to context steering, but the votes aren't split across an interest map and a danger map. Because of this, it suffers from the same lack of movement constraint that we see with steering behaviours. The paper gets around this by weighting avoidance behaviours much more highly, but that just ends up disabling some nice emergent behaviours, as we saw with the balanced vector problem:
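To make the voting mechanics concrete, here's a minimal sketch. The behaviour names, weights and vote curves are my own invention, not taken from the paper; the point is just the shape of the arbitration.

```python
# A minimal sketch of DAMN-style arbitration. Behaviours vote over a
# shared set of candidate headings; the arbiter weights and sums votes.
HEADINGS = list(range(0, 360, 45))  # 8 candidate directions, in degrees

def angular_distance(a, b):
    return min(abs(a - b), 360 - abs(a - b))

def seek(target):
    # Votes for headings near the target, against those pointing away.
    return [1.0 - angular_distance(h, target) / 90.0 for h in HEADINGS]

def avoid(obstacle):
    # Votes against the heading straight at the obstacle.
    return [-1.0 if h == obstacle else 0.0 for h in HEADINGS]

def arbitrate(weighted_votes):
    # weighted_votes: list of (weight, votes) pairs, one per behaviour.
    totals = [sum(w * v[i] for w, v in weighted_votes)
              for i in range(len(HEADINGS))]
    return HEADINGS[max(range(len(HEADINGS)), key=totals.__getitem__)]
```

Weight avoidance heavily, as in `arbitrate([(1.0, seek(0)), (5.0, avoid(0))])`, and the agent swerves off its preferred heading. That's the weighting trick the paper leans on instead of a separate danger map.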
The merging of votes doesn't happen in decision space. From the diagram below, it seems there's some metadata about the curves used to cast votes. Notice how a single central curve is created from the two behaviours, rather than one small peak and one large peak. This is essentially a rasterized version of steering behaviours combined through weighted averages.
I think this all adds up to a rather expensive way of implementing steering behaviours. That's somewhat understandable: the paper came out just as, or just before, steering behaviours were becoming popular, so the author may have been deep into his research by the time he heard of them.
There are several interesting aspects to the paper. It mentions that the behaviours all update at different frequencies, and that the arbiter may receive votes at any time. This is great for behaviours that are either low-priority or don't change much, and it makes parallelisation easy.
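The decoupling could look something like this (a sketch of the idea, not the paper's actual interface): the arbiter caches the most recent vote set from each behaviour, so a slow behaviour's last votes stay in force while faster ones update around it.

```python
class Arbiter:
    """Caches the latest votes per behaviour; behaviours submit at any rate."""

    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.latest = {}  # behaviour name -> (weight, votes)

    def submit(self, name, weight, votes):
        # Called from each behaviour on its own schedule (or thread);
        # only the most recent submission per behaviour is kept.
        self.latest[name] = (weight, votes)

    def decide(self):
        # Arbitrate over whatever votes are current right now.
        totals = [0.0] * self.num_slots
        for weight, votes in self.latest.values():
            for i, v in enumerate(votes):
                totals[i] += weight * v
        return max(range(self.num_slots), key=totals.__getitem__)
```

A slow path-planning behaviour might submit once a second while avoidance submits every frame; `decide()` always works from the freshest mix it has.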
DAMN uses multiple subsystems, each asking the behaviours a different question. A speed subsystem (or "arbiter") works out how fast to go, a turn arbiter decides on direction, and, because this was originally for controlling robots, a "field of regard" arbiter works out where to point the cameras. In comparison, context behaviours tend to use the maps primarily to compute a heading; speed is then calculated as a secondary factor, normally from the highest magnitude of interest or danger encountered. Splitting things up like this makes for better separation of concerns, at a possible redundancy cost depending on the implementation. It's an idea worth exploring.
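For contrast, here's roughly how the context-behaviour side gets both outputs from one pair of maps. This is a simplified placeholder, with an arbitrary slot count, masking rule and speed formula, not a faithful implementation:

```python
HEADINGS = list(range(0, 360, 45))  # 8 map slots, in degrees

def context_decide(interest, danger):
    # Keep only the slots with the lowest danger, then pick the most
    # interesting of those for the heading.
    lowest = min(danger)
    candidates = [i for i, d in enumerate(danger) if d == lowest]
    best = max(candidates, key=lambda i: interest[i])
    # Speed is a secondary output, scaled down by the worst danger seen.
    speed = max(0.0, 1.0 - max(danger))
    return HEADINGS[best], speed
```

One pass over the maps produces both answers, where DAMN would run a separate arbiter for each.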
The paper talks about structuring behaviours using a subsumption-style approach, with high-frequency basic behaviours providing a "first level of competence" that more complex, possibly lower-frequency behaviours are later built upon. I like this way of thinking about behaviours: you can let your higher-level behaviours fail, knowing the lower-level systems will catch you.
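One way to read the "allowed to fail" idea in code. This is a fallback sketch of the layering framing, not DAMN itself, where the behaviours all vote concurrently:

```python
def decide(layers, state):
    # Layers ordered from most sophisticated to most basic. A higher
    # layer may fail (return None); a lower "level of competence"
    # always catches it.
    for behaviour in layers:
        decision = behaviour(state)
        if decision is not None:
            return decision
    raise RuntimeError("the base layer must always decide")

# Hypothetical layers: path following can fail, wandering never does.
follow_path = lambda state: state.get("next_waypoint")  # may be None
wander      = lambda state: "wander"                    # always succeeds
```

If the planner has no waypoint, `decide([follow_path, wander], state)` quietly falls through to wandering instead of stalling.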
There are also some dense but potentially interesting passages discussing methods for evaluating the utility of each decision. It looks interesting but is a bit over my head; if anyone has any further information on what they were talking about, please share it in the comments.
In summary, I don't think there's a lot of similarity between context behaviours and DAMN, beyond the superficial. Context behaviours could take heed of DAMN's separation of concerns and the way polling is reversed, possibly making for better-structured code. DAMN could do with adopting some of the simplicity of steering behaviours, or, if required, the constraints and predictability of context behaviours.