The final project for Artificial Opponents is complete, and the final test/battle was... exciting, to say the least. My AI was relatively successful; it was neither the best nor the worst, though I ran into bugs during the test that had never once appeared in any of my own testing. As has often been the case in this class, almost everyone's final submission hit unexpected issues during the final test.
The architecture I used was, at a very basic level, inspired by a GDC '17 talk about AI for squads and units and how they share data. With that as inspiration, I used the following architecture: at the core, a global TankManager class handles purchasing units and managing the store; the TankManager initializes a series of TankBrains, one per unit, each of which mostly handles its own instructions. The TankBrains post their target choices and paths to a blackboard in the TankManager, which the other TankBrains then consult when choosing their own targets and paths.
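To make the shape of this concrete, here is a minimal sketch of the manager/brain/blackboard split. Everything here is illustrative: the class members, the `Blackboard` struct, and the `chooseTarget` heuristic (prefer an enemy no other brain has claimed) are my assumptions for the example, not the project's actual API.

```cpp
#include <memory>
#include <unordered_map>
#include <vector>

// Shared scratch space the TankManager owns; brains publish their
// decisions here so other brains can read them.
struct Blackboard {
    // unit id -> the enemy id that unit's brain has claimed
    std::unordered_map<int, int> claimedTargets;
};

class TankBrain {
public:
    TankBrain(int unitId, Blackboard& board) : id_(unitId), board_(board) {}

    // Pick a target, preferring enemies no other brain has claimed.
    int chooseTarget(const std::vector<int>& enemyIds) {
        for (int enemy : enemyIds) {
            bool claimed = false;
            for (const auto& [unit, target] : board_.claimedTargets)
                if (unit != id_ && target == enemy) claimed = true;
            if (!claimed) {
                board_.claimedTargets[id_] = enemy;  // publish the choice
                return enemy;
            }
        }
        // Everything is claimed: double up on the first enemy.
        int fallback = enemyIds.front();
        board_.claimedTargets[id_] = fallback;
        return fallback;
    }

private:
    int id_;
    Blackboard& board_;
};

class TankManager {
public:
    // One brain per purchased unit, all sharing one blackboard.
    void addUnit(int unitId) {
        brains_.push_back(std::make_unique<TankBrain>(unitId, board_));
    }
    TankBrain& brain(size_t i) { return *brains_[i]; }

private:
    Blackboard board_;
    std::vector<std::unique_ptr<TankBrain>> brains_;
};
```

The point of the blackboard is that no brain talks to another brain directly; each one only reads and writes shared state the manager owns, which keeps the units loosely coupled.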
This system worked very well overall. Notifications from the base game, including unit deaths, wall collisions, and more, were all pumped through an event handler in the TankManager that forwarded them on to the relevant TankBrain(s). This gave a level of flexibility and separation of concerns that held up in all of my personal tests. When I tested my AI against any of the other DLLs available, most of the time it steamrolled the enemy, and at worst it was more or less a tie.
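The event routing can be sketched like this. The event names, the handler signature, and the per-unit dispatch are assumptions for illustration; the real game's notification API surely differs.

```cpp
#include <unordered_map>

// Hypothetical notification types forwarded from the base game.
enum class GameEvent { UnitDied, WallCollision };

struct TankBrain {
    int wallHits = 0;
    bool alive = true;
    void onEvent(GameEvent e) {
        if (e == GameEvent::WallCollision) ++wallHits;
        else if (e == GameEvent::UnitDied) alive = false;
    }
};

class TankManager {
public:
    TankBrain& addUnit(int unitId) { return brains_[unitId]; }

    // The game calls this once per notification; the manager forwards
    // it only to the brain the event concerns.
    void onGameEvent(int unitId, GameEvent e) {
        auto it = brains_.find(unitId);
        if (it != brains_.end()) it->second.onEvent(e);
    }

private:
    std::unordered_map<int, TankBrain> brains_;
};
```

Funneling everything through one handler in the manager means each brain only ever sees events about its own unit, which is where most of the separation of concerns came from.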
The shop algorithms also worked very well. Each part type had its own value heuristic (damage per second divided by cost for turrets, speed divided by cost for treads, and health divided by cost for the body), and the TankManager picked the combination of parts with the highest total value, then balanced that value against the number of credits available. In general, it would purchase multiple tanks with some of the best parts, but not all of them, and the approach paid off.
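The core of that value-per-cost ranking might look something like the sketch below. The `Part` fields and the sample stats are made up for the example; the game's actual shop values were different (and, as it turned out, changed for the final battle).

```cpp
#include <vector>

// A shop part with a single scalar "value": dps for turrets,
// speed for treads, health for bodies. Stats here are invented.
struct Part {
    const char* name;
    double value;
    int cost;
    double valuePerCost() const { return value / cost; }
};

// Pick the best affordable part from a category by value-per-cost;
// returns nullptr if nothing fits in the credit budget.
const Part* bestAffordable(const std::vector<Part>& parts, int credits) {
    const Part* best = nullptr;
    for (const auto& p : parts)
        if (p.cost <= credits &&
            (!best || p.valuePerCost() > best->valuePerCost()))
            best = &p;
    return best;
}
```

Running this per category and then dividing the remaining credits across several tanks is what produced the "multiple tanks with good-but-not-best parts" behavior.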
Unfortunately, none of this came through in the final test. During this "battle", we ran into problems with everyone's DLL that no one had seen before. The store values had been completely reworked, and some people's pathfinding was so slow that it essentially broke the game if their AI bought multiple tanks.
Personally, my AI ran into an issue where it would throw an exception at the end of a match, regardless of who had won, which prevented the stats from being saved. Because of this, I wasn't included in many tests, so I have no idea how mine actually performed; from what I saw, it did reasonably well. I never once saw this crash before the final test, and I still don't know what caused it. As far as I can tell, every scenario that occurred in the final test had also occurred in my own tests without a crash.
Even though mine prevented stats from being collected by crashing at the very end, it still fared better than some of the others, because at least it actually worked during the round. Some of my classmates hit issues they had never seen before: their tanks would get stuck in place and stop functioning entirely, or opponents they had beaten consistently before the final battle suddenly became unbeatable.
The Ugly (The Project As a Whole)
All in all, the Tank Battle was probably the worst of the three projects for this class. It felt like the project kept changing while we were supposed to be working on it, and a lot of people assumed certain values wouldn't change; when they did, their entire systems fell apart. At face value, this sounds like an issue with those individuals, but to me it seems like the specifications and rules of the project were not communicated to us very well, and because things kept changing, it felt impossible to keep up.
I still have no idea why my AI was crashing, because when I test my DLL on my own with the most recent build of the game in my clone of the repo, it does not throw any exceptions. It is very frustrating that something changed in the "final battle" environment to cause this, and I had no way to test or investigate what it was. The structure of the project as a whole didn't lend itself well to in-depth debugging; it always felt like, even though we had the source code, huge chunks of the project were "hidden away" from us. Overall, I like Dean's idea to move the project into Unity next semester: still using DLLs, but reworking how they get called and how the game's architecture is set up into something we are more familiar with.
I love the concept of the Artificial Opponents class, but overall I feel like it didn't go as well as it could have this semester. Part of that was that most of us in the class cared much more about Capstone, which is where most of our resources went instead of into our AI, and part of it was the format of the projects, with the strange DLL system and the sometimes buggy games. Dean's plan moving forward is to take a year off from the course and revisit it the following fall, which I think is a great idea. A competitive, student-versus-student AI class could be a lot of fun, and with a better format it has a lot of potential going forward.