How to build Unity3D scripts from the command line

I use emacs for day-to-day programming, including writing unity scripts. For a while I made changes, then tabbed over to the unity editor to watch them compile and check for errors. This was, as you can imagine, tedious. Anyone not coding in MonoDevelop has the same issue, and even the MonoDevelop folks (god have mercy upon their souls) have issues.

It turns out there are multiple ways to compile your unity scripts from the command line, in such a way that most editors can be told to do it and parse the output for errors, greatly speeding up iteration time. I don’t think there’s anything new here, I’m just collating information that seems spread out across the net like horcruxes.

The first way to build your script files is the easiest to get working, but is cumbersome and slow. We’ll just pass a range of cryptic command line arguments to our Unity executable, it’ll launch without an editor front end, compile the scripts, dump any errors to the console, and quit. Cue up some regex and your editor can pull in the errors. Here is the command:

    path/to/unity -batchmode -logFile -projectPath path/to/project/dir -quit

On OSX you can find the unity executable at /Applications/Unity/Unity.app/Contents/MacOS/Unity.

You can read more about unity command line arguments in the unity docs, but let’s go through the options here:

  • -batchmode: stops any front end from loading
  • -logFile: tells unity what to do with any output. Without this parameter it’s all thrown away, which is of no use to our text editors. The docs above will tell you this argument needs a file parameter, but it’s undocumented that if you omit the parameter, all output, including errors, gets spat out to the terminal. That’s just what we want!
  • -projectPath: the path to the root of your project, where you find the sln file and the Assets directory
  • -quit: tells unity to exit straight after compiling.
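
To hook this into an editor, you just need to filter the log down to the compiler diagnostics, which mono prints in the familiar `File.cs(line,col): error CS....` shape. Here's a sketch of a wrapper doing that; the Unity path and the diagnostic pattern are assumptions, so adjust both for your install:

```shell
#!/bin/sh
# Sketch of a wrapper for editor integration. The Unity path and the
# diagnostic pattern are assumptions -- adjust both for your setup.
UNITY=/Applications/Unity/Unity.app/Contents/MacOS/Unity

# mono's compiler writes diagnostics as "File.cs(line,col): error CS....",
# so match .cs(digits,digits): followed by error or warning.
ERR_PATTERN='\.cs\([0-9]+,[0-9]+\): (error|warning)'

# Compile the project at $1 and print only the compiler diagnostics.
unity_compile() {
    "$UNITY" -batchmode -logFile -projectPath "$1" -quit \
        | grep -E "$ERR_PATTERN"
}
```

Point your editor's compile command at this and its existing error-parsing regexes should pick the lines up.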

This works on every platform, and has the nice feature of pulling in new files before compiling, so we don’t need a separate step for that. However it is slow as a milk cart going uphill, because it has to load and boot a lot of the unity runtime before it can do the simple thing of finding mono and asking it to compile a solution file for us.

It is also annoying because it doesn’t work if the unity editor is currently open. Even when in a heavy coding session there’s normally a need to run the game or tweak inspector values at regular intervals, and opening and closing the editor is not going to help you stay in a nice flow. Having said that, this drawback might not matter on a continuous integration server, so it might be the way to go.

So since this is just asking mono to compile for us in a round-about way, how do we cut out the middle man? MonoDevelop's command-line build tool is called mdtool, and it ships with Unity. On OSX you can find it at /Applications/Unity/MonoDevelop.app/Contents/MacOS/mdtool. Here's the command to give it:

    path/to/mdtool build path/to/project/project.sln

Here we just need to give mdtool the command build, and the path to the sln file. This is super-quick compared to the Unity approach, and works with the editor still open, but unfortunately won’t pick up newly added files. You’ll still have to tab to the editor for those to be picked up. However that’s relatively uncommon, so isn’t too much of a bother.

However when I tried this with the mdtool that ships with unity, I got very strange errors. They used to be relatively compact, but nowadays there are segfaults and all kinds of stuff going on. I did some casual googling over a few weeks, and couldn’t find a solution. But there was a workaround: install the latest Xamarin Studio (a free mono dev environment based on MonoDevelop), and use its mdtool. On OSX that’s at /Applications/Xamarin\ Studio.app/Contents/MacOS/mdtool.

So there you go: with one of these two approaches, you should be able to compile your unity scripts on the command line, find the errors and connect them to your editor of choice. These should both work on Windows as well, but if someone can confirm in the comments that would be great.

If you’re interested, my emacs unity helper functions, including some flycheck compilers using both mdtool and unity, can be found on github.

Anatomy of a unit test

In my spare time I maintain a unit testing library built for Unity3D. It’s called UnTest, it’s open-sourced under the MIT license, and you can download it here.

In the Unity3D community forums announcing UnTest, _Shockwave asked for an example of some real-life unit tests written with this framework. UnTest is very xUnit-flavoured, so they follow a standard pattern, but I thought it would be a good excuse to talk about good unit testing practice.

Much of my unit testing approach is from Roy Osherove’s Art of Unit Testing, which is a very readable and practical book on unit testing. It’s aimed at .Net, so it’s highly applicable to Unity3D development. The Art of Unit Testing website also has some recorded conference lectures from Osherove that are worth watching. If you want to get better at writing unit tests, these are great resources.

The unit test I’m going to dissect is below. It’s a real-life test from a production behaviour tree system. It’s not really important here to understand what a behaviour tree or a selection node is, as much as the patterns and conventions I followed. Good unit tests are readable, maintainable and trustworthy. As we walk through the test, I’ll explain how these qualities apply, and how to maximise them.

    [Test]
    void UpdatePath_OneActionChild_AddsSelAndChildToPath() {

        var actionNode = FakeAction.CreateStub();
        m_selNode.AddChild(actionNode);

        m_selNode.UpdatePath(m_path, NodeResult.Fails, null);

        var selectionThenAction = new ITreeNode[] {
            m_selNode, actionNode };
        Assert.IsEqualSequence(m_path, selectionThenAction);
    }

To increase readability, the first thing to note is the context of the file you can’t see. It’s in a file called SelectionNodeTests.cs, so I instantly know this test applies to the SelectionNode class. There’s only one class in this file, with the same name, so there’s no chance of confusion.

The name of the function follows a consistent convention throughout the codebase: FunctionUnderTest_Context_ExpectedResult. There are many naming conventions you could follow; this is the one I use. Context is how we set up the world before running the function. In this case, we’re adding a single action node to the selection node. ExpectedResult is how we want the function to behave; here we want the selection node and the action node to be added to the path.

It’s not important how long the name of this function is, since it’s never called from anywhere. The more readable and informative you can make the function name, the easier it will be to figure out what went wrong when it fails.

The unit test is split into three sections following the AAA pattern: Arrange, Act, Assert.

Arrange, where I set up the selection for testing:

    var actionNode = FakeAction.CreateStub();
    m_selNode.AddChild(actionNode);

All I need to do is create an action node and add it to the selection node.

Act, where I execute the function we want to test:

    m_selNode.UpdatePath(m_path, NodeResult.Fails, null);

Assert, where I check that the end condition we expected has been fulfilled:

    var selectionThenAction = new ITreeNode[] {
        m_selNode, actionNode };
    Assert.IsEqualSequence(m_path, selectionThenAction);

Here I construct what I expect the path to be, and assert that it matches the actual path using an Assert library provided by UnTest.

By following this layout, all tests can be easily scanned. When the time comes to fix or update a test, developers can dive in quickly.

You’ll notice I could compact the Assert section into one line:

    Assert.IsEqualSequence(m_path, new ITreeNode[] { m_selNode, actionNode });

The reason I keep it separate is to avoid “magic numbers”. It’s too easy to write code like, Assert.IsEqual(result, 5). The writer may know what this 5 means, but it would be much better for future readers to put it in a named variable and write Assert.IsEqual(result, hypotenuseLength).

Now that this test is as readable as possible, how did I make it maintainable too? You’ll notice that by improving readability I’ve already gone some way towards maintainability: something that’s easier to read is easier to understand, and therefore easier to maintain. But there are other things I do as well.

Check out the first line:

    var actionNode = FakeAction.CreateStub();

I need an action to put into the selection node. I could use an existing IAction concrete class, but then any bugs in that concrete class might cause this test to fail. I’ll cover in more detail later why that’s bad, but for now just pretend it sucks.

I could derive a new class from IAction, which I could keep simple enough to avoid bugs, but then I’d have to maintain that whenever the Action class interface changed. It’s much easier to use a “mocking framework” to do most of the hard work for me.

A mocking framework is a library that can construct a new type at runtime that derives from IAction and just does the right thing (among many other things). Then any changes are picked up for me automatically, and I have less code to maintain. If that sounds like magic, that’s because it is.

There’s a mocking framework behind that FakeAction.CreateStub() call, but since it’s such a common use case in this test suite I’ve wrapped it up in a helper function.
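
For illustration, here is roughly what that helper amounts to, written by hand. The real version asks the mocking framework to generate the stub class at runtime instead, and IAction's members here are invented for the example:

```csharp
// IAction here is a stand-in -- the real interface's members differ.
public interface IAction {
    bool Execute();
}

// Roughly the class a mocking framework generates for you at runtime:
// an IAction that "just does the right thing" (returns default values).
class StubAction : IAction {
    public bool Execute() { return false; }
}

// The helper wraps stub creation so individual tests stay terse. The
// real FakeAction.CreateStub() gets this type from the framework instead.
public static class FakeAction {
    public static IAction CreateStub() { return new StubAction(); }
}
```

The pay-off of the framework over this hand-rolled version is that when IAction changes, the generated stub tracks it with no edits to test code.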

Any mocking framework that works with mono will work with Unity3D. I use Moq. The latest version is here. I’ve mirrored this in a unitypackage here for easy importing to Unity.

To further isolate myself from changes, I’m constructing the member variables m_selNode and m_path in a setup function (not shown). This function is run automatically before every test, and makes new SelectionNode and Path objects. This is not only handy, because they’re used in every test in the class, but also isolates the unit tests from changes to the constructor signatures. Other commonly-used functions can also be hidden behind helper functions, but it’s best not to hide your function-under-test for readability reasons.

The final thing I need to do is make the test “trustworthy”.

By going through the maintainable and readable steps, I’ve made sure this test depends on the minimum amount of game code. When this test fails, hopefully it will only be because the function under test, UpdatePath(), had an error.

The more game code you depend on, the closer your test slips along the spectrum from unit to integration test. Integration tests check how systems connect together, rather than single assumptions. They have their place in a testing plan, but here I’m trying to write a unit test. A great rule of thumb is that a single line of buggy code should cause failures in the minimum of unit tests, and ideally only one. If lots fail, that’s because the code isn’t isolated properly and you’ve ended up with some integration tests.

Some of my early unit tests, from F1 2011, created a whole world for the AI to move in and recorded the results, rather than mocking the surrounding code like we have here. The end result was that a single bug in the world code could cause many, many tests to fail. That made it hard to track down the root cause of a bug, and meant I had probably written integration tests instead of unit tests.

When this test does fail, it will be deterministic. There’s no dependency here on databases, network services, or random number generators. There’s nothing worse than unit tests that fail once in a blue moon, because they erode developer trust in the test suite. That’s how you end up with swathes of tests commented out, and wasted engineering time.

Now you understand why I’ve written this real-life unit test in this way, and why it’s important your unit tests are readable, maintainable and trustworthy. Like any type of programming, writing good unit tests takes practice and perseverance. They’re truly the foundation of your project, giving you the freedom to restructure at will and the confidence that your game code is high quality. But like any foundation, if they’re not well engineered the whole edifice comes rapidly crumbling down. Take the time to follow up with the resources I linked above, and you will hopefully avoid that situation.

Unity 4.2 features write-up

EDIT: Unity 4.2 is now available for download, and you can read a more comprehensive feature list on the official blog.

A quick google didn’t find anyone who had written this up beyond the headlines, so here’s the gritty details on what’s in 4.2, mainly taken from the Unite Nordic 2013 Keynote, here: https://www.youtube.com/watch?feature=player_detailpage&v=QY-e2vdPdqE#t=1246s

All of these are in all versions of 4.2, including the free version:

Edit: Seems I misinterpreted part of Lucas’s speech. All these features are free, they will appear in whatever version of unity is most suitable. Eg shadows are part of Unity3D Pro, so those improvements are available to all Pro users for free. Here’s the list:

  • Windows 8 Store export, to RT as well, so they work on RT tablets
  • Mechanim avatar creation API: no longer need skeleton at time of build, can be applied to new skeleton at run time. Helps with player-created avatars.
  • Anti-aliased render textures: useful for the Oculus Rift, because VR headsets use a render target per eye, so now those targets can be anti-aliased.
  • Stencil buffer access
  • Shadow batching: reduce draw calls of shadow-heavy scenes. ~50% reduction in shadow rendering in unity-internal shadow-heavy scenes.
  • Improved shadows on OSX!
  • Headless linux player: make linux servers from your game code. That’s pretty cool.
  • Fixed problems where lots of frequent garbage collection could interrupt audio.
  • Rigid bodies can capture shuriken particle collisions, so you can do cool things like receive forces from particles
  • You can cancel builds! And asset imports! And switching platforms!
  • Presets for colours, curves and gradients in the editor. Handy for reusing data across components.
  • Memory snapshot view now tells you the reasons an asset is loaded. The example shown is a texture loaded because of a material because of a bunch of meshes.
  • An import option to reduce white pixels in alpha’d fringes of transparent textures

And then there are the headlines about the mobile add-ons becoming free, which everyone has heard by now:

http://blogs.unity3d.com/2013/05/21/putting-the-power-of-unity-in-the-hands-of-every-mobile-developer/

Of those new features, I’m excited by the headless player support. That’s going to be great for client-server games that want to run on AWS or something. The presets also sound interesting – I’m a huge fan of animation curves, and anything that increases their functionality is great by me. And I could have used the more detailed memory snapshot tool while optimising Sonic Dash.

So I’m looking forward to the release of 4.2, and I hope you are too.

Reducing Memory Usage in Unity, C# and .NET/Mono

Unity on iOS uses an early version of Mono’s heap manager. This manager doesn’t do packing, so if you fragment your heap, it’ll just grab a new one for you. I am under the impression the Unity boffins are working on a new heap manager to get around this problem, but for now a game with no memory leaks can end up consuming ever-increasing amounts of memory.

C# is a fun language that allows you to quickly write powerful code without sacrificing readability. The downside is that writing natural C# code produces a lot of garbage. The only way around this is to eliminate or reduce your heap allocations, so I’ve compiled a handy list of ways to do this without reducing functionality. The end effect is that your C# code looks much more like C++, and so you lose some of that C# power, but such is life. As an added bonus, heap allocs are inherently more CPU-intensive than stack allocs, so you’ll probably save some frame time as well.

To target your efforts, the Unity profiler can help you find the functions that make the most allocations. It’s not a lot of info, but it’s there. Open the profiler, run the game, select the CPU profiler, and tap the GC Alloc column to sort by the worst offenders. Apply these guidelines to those functions first.

  • Avoid using foreach(). It calls GetEnumerator() on your list type, which will allocate an enumerator on the heap just to throw it away. You’ll have to use the more verbose C++-style for(;;) syntax. EDIT: Unless you’re foreach-ing over an array. That causes no allocations. I guess that’s a special case that becomes syntactic sugar for a for(…){}? Thanks to Ian Horswill for pointing that out.
  • Avoid strings. Strings are immutable in .NET and allocated on the heap. You can’t manipulate them in-place like C. For UI, use StringBuilders to build up strings in a memory-efficient manner, delaying the conversion to string until as late as possible. You can use them as keys because literals should point to the same instance in memory, but don’t manipulate them too much.
  • Use structs. Struct types in mono are allocated on the stack, so if you have a utility class that won’t leave scope, make it a struct. Remember structs are passed by value, so you will have to prefix a parameter with ref to avoid any copy costs.
  • Replace scope-bound fixed-size arrays with structs. If you have a fixed-size array that doesn’t leave scope, consider either replacing it with a member array that you can reuse, or create a struct with fields that mirror it. I replaced a Vector3[4] that was allocated every time you called our spline class with a ControlList struct that had four fields. I then added an this[] property for index-based access. This saved a ton of allocations because it was such a high-frequency function.
  • Favour populating lists passed as ref parameters over returning new lists. This sounds like you’re not saving anything – you still need to heap-alloc the list you pass in, right? – but it allows us to, where necessary, make the next particularly ugly optimisation:
  • Consider storing the scope-local storage of a high-frequency function as a member variable. If your function needs a big list every time it’s called, make that list a member variable so the storage persists between frames. Calling .Clear() on a C# list won’t delete the buffer space so you have no/less allocs to do next frame. It’s ugly and makes the code less readable so needs a good comment, but can make a big difference on heavy lifters.
  • Avoid IEnumerable extension methods. It goes without saying that most Linq IEnumerable extension methods, as handy as they are, will create new allocations. However I was surprised that .Any(), called on an IList<>, which I expected to just be a virtual function to Count > 0, triggered an allocation. It’s the same for other funcs that should be trivial on an IList, like First() and Last(). If anyone can illuminate me on why this is I’d appreciate it. Because of this, and the foreach() restriction, I’d go as far as to say avoid the IEnumerable<> abstraction in your interfaces, and use IList<> instead.
  • Minimise use of function pointers. Putting a class method in a delegate or a Func<> causes it to be boxed, which triggers an allocation. I can’t find any way to store a link to a method without boxing. I’ve left most of my function pointers in there because they’re a massive boon to decoupling, and I’ll just have to live with the allocs, but I’ve removed some.
  • Beware cloned materials. If you get the material property of any renderer, the material will be cloned even if you don’t set anything on it. This material isn’t GC’d, and is only cleared up when you either change levels or call Resources.UnloadUnusedAssets(). Use myRenderer.sharedMaterial if you know you don’t want to adjust the material.
  • Don’t use raw structs as keys in dictionaries unless... If you use a Dictionary<K,V> type, and your key value is a struct, fetching a value from the dictionary using TryGetValue() or the index accessor can, somehow, cause an allocation. To avoid this, implement IEquatable<K> for your struct. I guess the dictionary is creating some default equality comparer every time. Thanks to Ryan Williams for finding this one.
  • 2014/5/8 Don’t overuse Enum.GetValues() or myEnumValue.ToString(). Enum.GetValues(typeof(MyEnum)) allocates an array with every call, and Enum.GetNames() does something similar. If you’re like me, you’re probably using these heavily, along with .ToString() on enum variables, to do UI, as well as in other places. You can cache both these arrays easily, allowing you to do a for loop over enum values as well as cachedEnumNames[(int)myEnumValue]. This doesn’t work if your enum values are manually set, eg flags.
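
Several of these tricks can be sketched together in plain C#. The types below are illustrative stand-ins, with floats in place of Vector3 and invented class names:

```csharp
using System;
using System.Collections.Generic;

// Trick: a scope-bound fixed-size array replaced by a struct with four
// fields and an indexer, so it lives on the stack instead of the heap.
public struct ControlList {
    public float P0, P1, P2, P3;   // floats stand in for Vector3 here

    public float this[int i] {
        get {
            switch (i) {
                case 0: return P0;
                case 1: return P1;
                case 2: return P2;
                case 3: return P3;
                default: throw new IndexOutOfRangeException();
            }
        }
    }
}

public class PathFinder {
    // Trick: scope-local scratch list hoisted to a member. Clear() keeps
    // the buffer, so after the first call there are no per-call allocs.
    List<int> m_workingSet = new List<int>();

    public int CountEven(int[] input) {
        m_workingSet.Clear();
        // Trick: for(;;) instead of foreach to avoid enumerator allocs.
        for (int i = 0; i < input.Length; i++) {
            if (input[i] % 2 == 0) m_workingSet.Add(input[i]);
        }
        return m_workingSet.Count;
    }
}

public enum Fruit { Apple, Banana, Cherry }

public static class EnumCache {
    // Trick: cache Enum.GetNames() once instead of allocating per call.
    public static readonly string[] FruitNames = Enum.GetNames(typeof(Fruit));
}
```

None of this is Unity-specific, so it can be profiled and unit-tested outside the engine.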

Feel free to add more in the comments.

Shameless plug: Easily add human-like and emergent decision making to your Unity3d game with DecisionFlex