Smoke-testing scenes, using Jenkins and Unity Test Framework

You may be familiar with Unity’s Test Runner window, where you can execute tests and see results. This is the user-facing part of the Unity Test Framework, a very extensible system for running tests of any kind. At Roll7 I recently set up the test runner to automatically run simple smoke tests on every level of our (unannounced) game, and made Jenkins report on failures. In this post I’ll outline how I did the former, and in part two I’ll cover the latter.

Play mode tests, automatically generated for every level
Some of our [redacted] playmode and editmode tests, running on Jenkins

(I’m going to assume you have passing knowledge of how to write tests for the test runner)

(a lot of this post is based on this interesting talk about UTF from its creators at Unite 2019)

The UTF is built upon NUnit, a .NET testing framework. That’s what provides all those [TestFixture] and [Test] attributes. One NUnit feature that UTF also supports is [TestFixtureSource]. This attribute lets you make a sort of “meta test fixture”: a template for how to make test fixtures for specific resources. If you’re familiar with parameterized tests, it’s like that but at the fixture level.

We’re going to make a TestFixtureSource provider that finds all the level scenes in our project, and then the test fixture itself, which loads a specific level and runs some generic smoke tests on it. The end result is that adding a new level automatically adds an entry for it to the play mode tests list.

There are a few options for source providers (see the NUnit docs for more), but we’re going to make an IEnumerable that finds all our level scenes. Whatever this IEnumerable yields gets passed to our fixture’s constructor – you could use any type here.

using System.Collections;
using System.Collections.Generic;
using UnityEditor;

class AllRequiredLevelsProvider : IEnumerable<string>
{
    // yields the asset path of every scene under our levels folder
    IEnumerator<string> IEnumerable<string>.GetEnumerator()
    {
        var allLevelGUIDs = AssetDatabase.FindAssets("t:Scene", new[] {"Assets/Scenes/Levels"} );
        foreach(var levelGUID in allLevelGUIDs)
        {
            var levelPath = AssetDatabase.GUIDToAssetPath(levelGUID);
            yield return levelPath;
        }
    }
    public IEnumerator GetEnumerator() => (this as IEnumerable<string>).GetEnumerator();
}

Our test fixture looks like a regular fixture, except it also has the [TestFixtureSource] attribute linking to our provider. Its constructor takes a string that tells it which level to load.

using System.Collections;
using NUnit.Framework;
using UnityEngine.SceneManagement;
using UnityEngine.TestTools;

[TestFixtureSource(typeof(AllRequiredLevelsProvider))]
public class LevelSmokeTests
{
    private string m_levelToSmoke;
    public LevelSmokeTests(string levelToSmoke)
    {
        m_levelToSmoke = levelToSmoke;
    }

Now our fixture knows which level to test, but not how to load it. Test fixtures have a [SetUp] attribute that runs before each test, but loading the level fresh for every test would be slow and wasteful. Instead let’s use [OneTimeSetUp] (👀 at the inconsistent capitalisation) to load and unload our level once per fixture. This depends somewhat on your game’s implementation, but for now let’s go with UnityEngine.SceneManagement:

// class LevelSmokeTests {
    [OneTimeSetUp]
    public void LoadScene()
    {
        // load once per fixture, not once per test
        SceneManager.LoadScene(m_levelToSmoke);
    }

Finally, we need some tests that will work on any level we throw at them. The simplest approach is probably to just watch the console for errors as we load in, while we sit in the level, and as we load out again. Console errors at any of these stages should fail the test.

UTF provides LogAssert to validate the output of the log, but at the time of writing it only lets you prescribe what should appear. We don’t care about Debug.Log() output, but we do want to know if anything worse than that was logged. In particular, in our case we’d like to fail on warnings as well as errors: too many “benign” warnings can hide serious issues! So here’s a little utility class called LogSeverityTracker that helps check for clean consoles. Check the comments for usage.
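Something along these lines will do the job – this is a minimal sketch, assuming a StartTracking() method you call when the fixture spins up; the real class may differ in the details:

// Illustrative sketch of a log-severity tracker; names and details are placeholders.
using System.Collections.Generic;
using NUnit.Framework;
using UnityEngine;

public class LogSeverityTracker
{
    readonly List<string> m_problems = new List<string>();

    // call once (e.g. from [OneTimeSetUp]) to start listening to the console
    public void StartTracking()
    {
        m_problems.Clear();
        Application.logMessageReceived += HandleLog;
    }

    void HandleLog(string condition, string stackTrace, LogType type)
    {
        // anything more severe than a plain Debug.Log counts as a problem
        if (type != LogType.Log)
        {
            m_problems.Add($"[{type}] {condition}");
        }
    }

    // fails the current test if any warnings, errors or exceptions were seen,
    // then resets so the next test only reports new problems
    public void AssertCleanLog()
    {
        Assert.That(m_problems, Is.Empty, "expected a clean console");
        m_problems.Clear();
    }
}

In the fixture, m_logTracker would then be a LogSeverityTracker field, with StartTracking() called at the top of the [OneTimeSetUp] method, before the scene load, so load-time errors are caught (and you’d probably want to unsubscribe again in a [OneTimeTearDown]).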

Our tests can use the [Order] attribute to ensure they happen in sequence:

// class LevelSmokeTests {
    [Test, Order(1)]
    public void LoadsCleanly()
    {
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(2)]
    public IEnumerator RunsCleanly()
    {
        // wait some arbitrary time
        yield return new WaitForSeconds(5);
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(3)]
    public IEnumerator UnloadsCleanly()
    {
        // how you unload is game-dependent 
        yield return SceneManager.LoadSceneAsync("mainmenu");
        m_logTracker.AssertCleanLog();
    }

Now we’re at the point where you can hit Run All in the Test Runner and see each of your levels load in turn, wait a while, then unload. You’ll get failed tests for console warnings or errors, and newly-added levels will get automatically-generated test fixtures.

More tests are undoubtedly better than fewer. Depending on the complexity and setup of your game, the next step might be to get the player to dumbly walk around for a little bit. You can get a surprising amount of info from a dumb walk!
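For example, if your levels contain a player object you can drive directly, a walk-around test might look a little like this sketch (the “Player” tag lookup and the naive transform shove are placeholders for however your game actually finds and moves its player):

// class LevelSmokeTests {
    // Illustrative only: slot this in before UnloadsCleanly,
    // renumbering the [Order] attributes as needed.
    [UnityTest, Order(3)]
    public IEnumerator WalksAroundCleanly()
    {
        var player = GameObject.FindWithTag("Player");
        Assert.That(player, Is.Not.Null, "expected a tagged player object in the level");

        // shuffle forwards for a few seconds and see if anything complains
        for (var elapsed = 0f; elapsed < 5f; elapsed += Time.deltaTime)
        {
            player.transform.position += player.transform.forward * Time.deltaTime;
            yield return null;
        }

        m_logTracker.AssertCleanLog();
    }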

In part 2, I’ll outline how I added all this to Jenkins. It’s not egregiously hard, but it can be a bit cryptic at times.

Adding a custom movement mode to Unreal’s CharacterMovementComponent via blueprints

This isn’t a full tutorial, and I’m not an expert, but I noticed this knowledge wasn’t really collected together anywhere, so I’m putting something together here. Please shout if there are any holes or mistakes.

The CharacterMovementComponent that comes with the third-person starter kit has several movement modes you can switch between using the Set Movement Mode node. Walking, falling, swimming, and flying are all supported out-of-the-box, but there’s also a “Custom” option in the dropdown. How do you implement a new custom movement mode?

First, limitations: I’ve not found a way to make networkable custom movement modes via blueprint. I think I need to be reusing the input from Add Movement Input, but I’m not yet sure how. Without doing that, the server has no idea how to do your movement.

When you set a new movement mode, the OnMovementModeChanged event (which is undocumented??) gets fired:

[Blueprint screenshot: the OnMovementModeChanged event]

At this point you can toggle state or meshes, zero velocity, and other things you might want to do when entering and leaving your custom mode.

The (also undocumented) UpdateCustomMovement event will fire when you need to do movement:

[Blueprint screenshot: the UpdateCustomMovement event]

From here you can read your input axis and implement your behaviours. You can just use the delta and Set Actor Location, but there’s also the Calc Velocity node which can help implement friction and deceleration for you.

To return to normal movement again, I’ve found it’s safest to use Set Movement Mode to enter Falling, and let the component sort itself out, but ymmv there.

Hope someone finds this helpful.

Melbourne Games Week

Oh yeah I have a blog, don’t I.

I’m going to be in Australia for the week of the 31st October 2016. I’ll be appearing in the Unite Melbourne (https://unite.unity.com/2016/melbourne) keynote and doing an Alphabear post mortem at 5.30pm in Room 1, both on the 31st. I think the former might be streamed live? But I haven’t got a link yet.

I’m also doing a Fireside Chat at GCAP at 2.10pm on November 2nd with Laura Voss: http://gcap.com.au/sessions/laura-voss-talks-to-andrew-fray/. I’m excited about this because I don’t really know what to expect, but hopefully I can be entertaining and informative. Enterformative.

I’m leaving late on the Friday, but have no other firm plans. If you’d like to meet up, let me know here or on twitter!

How did I get into AI gamedev?

A computer science graduate got in touch with me via an academic friend, wanting to know how they could become a games industry AI programmer, and in particular what kind of masters course would help their career. Although that’s definitely a possible route, it’s not the one I took. My full answer follows:

I’m not currently an AI programmer, but I was one in a former life, on games like Operation Flashpoint: Dragon Rising and F1 2010/2011. To pimp myself even more, I’ve spoken at GDC’s AI Summit, been profiled on AIGameDev.com and published a chapter in Game AI Pro 2.

The simple answer to your question is that it’s really hard to get in as a graduate AI programmer. I don’t think specific courses would help as much as devouring sites like AIGameDev.com. In fact their sister site nucl.ai has a good intro-to-AI online course that goes beyond mere pathfinding: https://courses.nucl.ai/. You can also dig into the GDC (Game Developers Conference, the biggest event in gamedev each year) vault for lectures under the AI tag. Most recent ones are paywalled, but some are free, and everything more than three years old is free: http://www.gdcvault.com/browse/?track_category=1402.

Hopefully some of these will inspire you towards some side-project work that you can bring along to industry interviews.

The route I took was initially becoming a graduate gameplay programmer, and making it clear to my bosses that I wanted to transition into AI as fast as possible. Many smaller studios (which is almost all of them in the UK at the moment) might not have a dedicated AI programmer, so as the “AI guy” gameplay programmer, you can grab the odd task here and there that might look good on a future CV. I used that approach to do animal AI on a zoo expansion to Rollercoaster Tycoon 3, and to prototype pathfinding for thousands of actors with very limited Playstation 2 (!) resources on a rollercoaster game that was so crap I won’t namedrop it. 🙂

That took just over two years, and once I had that I was able to apply to other studios and get jobs on dedicated AI teams. All the while learning about the realities of shipping games, even if I was sometimes working on UI instead of AI. I think that’s a much better use of time than a masters. Sorry [my aforementioned academic friend] Nick!

If you’re interested in the hot topics in the gamedev AI world at the moment (which is, needless to say, very different from academic AI), check out Monte Carlo tree search, recurrent neural networks, and AI-assisted tooling. Slightly older tech includes Hierarchical Task Planners, Behaviour Trees and Utility Systems.

Hope this helps. If you have any followup questions, just shout.

What do you think? How did you get into the industry? How was your experience different?

Remote Working at Spry Fox slides

Here are my annotated slides for last week’s GDC talk. The attendance was good and the vibe I got was that people enjoyed it! Last year I (eventually) screencast my unit testing talk and put it on YouTube; I’ll update the blog if I get around to doing the same with this one. It’ll also be on the GDC Vault soon, and I’ll ask if they’ll make it a free talk.

GDC 2015 – Remote working and automated testing

I’m speaking again at GDC in March. This will be my third GDC in a row, yet I’m no less nervous!

On Wednesday at 11am, I’m speaking for half an hour on remote working at Spry Fox (room 2020 west). I’ll be covering tools and processes, but the things I’m most interested in talking about are the qualities of a good remote developer, and the hacks we use to build a tight, supportive team out of people on different continents.

On Thursday I’m building on the success of last year’s unit testing talk to chair a roundtable on automated testing. Anyone interested in any layer of it – from CI servers to smoke tests to unit tests – should come along. Bring war stories, gotchas and hacks.

If you’ve got ideas you’d like raised during the roundtable, but can’t make it to San Francisco, why not leave a comment below? Here are some questions to get you thinking:

  • What’s the coolest piece of automated testing tech you’ve seen used?
  • What’s the most dramatic improvement you’ve seen after introducing some automated testing into a process?
  • Are there any kinds of automated testing you find don’t work so well with games?
  • Do you think automated testing is mainstream yet? What more can we do to sell various types of testing to the management?

How to symbolicate iOS crash dumps from Unity Cloud Build games

Disclaimer: this advice is provided without warranty, works on my machine, etc. Use it at your own risk.

We’ve been using Unity Cloud Build at Spry Fox for a soon-to-be-announced project. Since we’re a remote company, it’s great to be able to push and know that Android and iOS builds with your new feature will soon be available to everyone on the team. It’s saving a lot of time every day.

This morning I pushed our new IAP back-end, which of course involved some low-level stuff on both platforms. Everything worked on local builds, but when I tried the iOS build from Unity Cloud, it crashed on start-up. Getting the callstack of the crash turned into a bit of an odyssey. I thought I’d document it here.

By default, Unity will not generate Xcode projects that produce useful debug info. We can fix this by adjusting the Xcode project after generation, but that’s no good for Unity Cloud Build where we have no control.

So, one time only, you should add this file https://gist.github.com/tenpn/f8da1b7df7352a1d50ff to your project (EDIT: in an editor subfolder; thanks Terry Paton). That will do the post-processing for you and for Unity Cloud Build. (There are conflicting reports as to whether this increases build size, but you definitely want it while developing. It might still be worthwhile leaving it in for release, because it will be extremely helpful for chasing down crashes in the wild.)
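I won’t reproduce the gist here, but as a rough illustration, a post-process step along these lines can be written with Unity’s Xcode project API in more recent Unity versions (the exact calls and build setting are an approximation, not the gist’s contents):

// Rough sketch of a post-process step that enables dSYM generation.
using UnityEditor;
using UnityEditor.Callbacks;
using UnityEditor.iOS.Xcode;

public static class EnableDSYMPostProcess
{
    [PostProcessBuild]
    public static void OnPostProcessBuild(BuildTarget target, string pathToBuiltProject)
    {
        if (target != BuildTarget.iOS)
            return;

        var projectPath = PBXProject.GetPBXProjectPath(pathToBuiltProject);
        var project = new PBXProject();
        project.ReadFromFile(projectPath);

        // ask Xcode to emit full debug symbols alongside the build
        var targetGuid = project.TargetGuidByName(PBXProject.GetUnityTargetName());
        project.SetBuildProperty(targetGuid, "DEBUG_INFORMATION_FORMAT", "dwarf-with-dsym");

        project.WriteToFile(projectPath);
    }
}

As with the gist, something like this needs to live in an editor folder so it only runs as part of the build.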

Wait for Unity Cloud Build to produce a new build containing that change, install it, and repro the crash on your device. Then follow these steps:

  1. Go to the cloud build project for your game, and find the dSYM and ipa downloads for the build you installed. At the moment, you click Install on the build in the history list, then Share, then both Download to Desktop for the ipa and Download dSYM.
  2. Unzip the dSYM somewhere safe. I put it in a new folder named after the Cloud build number we are investigating. The location should be somewhere Spotlight can index.
  3. Rename the ipa extension to zip, and unzip the .app inside to the same folder as the dSYM.
  4. Attach your iOS device to your OS X machine via USB.
  5. In Xcode (these steps are for Xcode 6.1.1), go to Window->Devices.
  6. Select your device in the left pane, and click View Device Logs.
  7. There should be a crash dump for your game. Locate it by the process name and crash time columns and click on it.
  8. The right window should now have the crash dump with a named callstack. If not, right-click on the crash dump in the left pane and select Re-Symbolicate Log.

Hopefully this gets you enough info to start debugging your app.

Shameless plug: DecisionFlex is a new Unity plugin that lets your AI make human-like and emergent decisions.

DecisionFlex

DecisionFlex considerations

For a while I’ve been working on an AI plugin for Unity3D called DecisionFlex. It’s a decision-making tool for your games, based on Utility Theory. It’s great for when you have multiple actions to choose between, and lots of factors to take into consideration. For instance:

  • A soldier choosing its next target, based on target range, type and the soldier’s ammo count
  • A Sims-like human choosing to eat, drink, work or exercise based on hunger, thirst, wealth and fitness
  • A bot choosing to prioritise picking up a health pack, based on the bot’s HP and distance to the nearest pack
  • Any time you might find yourself writing highly nested or cascading if-then statements

DecisionFlex is editor-based and data-driven, so you can construct new decisions and actions, and tweak considerations, without diving into code. The code needed to hook your game up to DecisionFlex is minimal, and you don’t need to understand complex equations.

DecisionFlex isn’t quite ready to ship yet, but I’m ready to start talking about it. You can find more information and a web demo here:

http://www.tenpn.com/decisionflex.html