Tags: swift, neural-network, game-development, coreml

CoreML for Game AI with multiple inputs and multiple outputs


I'm playing around with making a 2D game, and I'd like an AI enemy to chase/attack/avoid the main character under certain circumstances.

I've been thinking about what the AI would need to do:

  1. Process what it can see
  2. Move up, down, left, right
  3. Attack

Given the prominence of CoreML, could I build a *.mlmodel that takes in, for example, a picture of the scene plus some other inputs, and outputs five messages: up, down, left, right, attack?

The way I see it working would be, for every frame of the game:

  1. Get the inputs
  2. Send to CoreML for processing
  3. CoreML returns all 5 outputs
  4. The enemy AI acts on those outputs.

Is this something CoreML could handle?
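
To make the idea concrete, here's a rough sketch of what I imagine that per-frame call looking like. It assumes a hypothetical compiled model whose inputs are an image feature plus one numeric feature, and whose outputs are five score values; every name below (`EnemyBrain`, `scene`, `distanceToPlayer`, the output names) is a placeholder, not a real model:

```swift
import CoreML
import CoreVideo

// Hypothetical action set matching the five model outputs.
enum EnemyAction: String, CaseIterable {
    case up, down, left, right, attack
}

final class EnemyBrain {
    private let model: MLModel

    init(modelURL: URL) throws {
        // Load a compiled .mlmodelc from disk (path is up to the game).
        model = try MLModel(contentsOf: modelURL)
    }

    /// Called once per frame with the current observation.
    func decide(scene: CVPixelBuffer, distanceToPlayer: Double) throws -> EnemyAction {
        // 1. Gather the inputs as Core ML feature values.
        let inputs = try MLDictionaryFeatureProvider(dictionary: [
            "scene": MLFeatureValue(pixelBuffer: scene),
            "distanceToPlayer": MLFeatureValue(double: distanceToPlayer)
        ])

        // 2. Send to Core ML for processing.
        let outputs = try model.prediction(from: inputs)

        // 3. Read all five output scores and act on the highest one.
        let best = EnemyAction.allCases.max { a, b in
            (outputs.featureValue(for: a.rawValue)?.doubleValue ?? 0)
                < (outputs.featureValue(for: b.rawValue)?.doubleValue ?? 0)
        }
        return best ?? .up
    }
}
```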


Solution

  • While it is possible to train a machine learning model to perform these actions given the right circumstances, I believe that GameplayKit is closer to what you're looking for.

    Specifically, the chase / attack / avoid actions that you describe are similar to actions in the "DemoBots" sample code project from a few years ago. That should be a good place to start. The Deeper into GameplayKit with DemoBots WWDC video might also be a good resource.
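
    Purely as an illustration of that suggestion, here is a minimal sketch of how chase / attack / avoid might be modelled with a GKStateMachine plus GKAgent2D steering goals. GKStateMachine, GKAgent2D, GKGoal, and GKBehavior are real GameplayKit types; the state class names and transition rules below are assumptions for this example, not taken from DemoBots:

```swift
import GameplayKit

// Base class holding references to the two agents involved.
class EnemyState: GKState {
    unowned let enemy: GKAgent2D
    unowned let player: GKAgent2D

    init(enemy: GKAgent2D, player: GKAgent2D) {
        self.enemy = enemy
        self.player = player
        super.init()
    }
}

class ChaseState: EnemyState {
    override func didEnter(from previousState: GKState?) {
        // Steer toward the player while chasing.
        enemy.behavior = GKBehavior(goal: GKGoal(toSeekAgent: player), weight: 1.0)
    }
    override func isValidNextState(_ stateClass: AnyClass) -> Bool {
        stateClass is AttackState.Type || stateClass is AvoidState.Type
    }
}

class AttackState: EnemyState {
    override func didEnter(from previousState: GKState?) {
        // Trigger the attack animation / damage logic here (game-specific).
    }
    override func isValidNextState(_ stateClass: AnyClass) -> Bool {
        stateClass is ChaseState.Type || stateClass is AvoidState.Type
    }
}

class AvoidState: EnemyState {
    override func didEnter(from previousState: GKState?) {
        // Steer away from the player, e.g. while at low health.
        enemy.behavior = GKBehavior(goal: GKGoal(toFleeAgent: player), weight: 1.0)
    }
    override func isValidNextState(_ stateClass: AnyClass) -> Bool {
        stateClass is ChaseState.Type
    }
}

// Build the machine once; in the game loop, switch states based on game rules
// and call machine.update(deltaTime:) plus the agents' update(deltaTime:),
// keeping agent positions in sync with your sprites (e.g. via GKAgentDelegate).
func makeEnemyStateMachine(enemy: GKAgent2D, player: GKAgent2D) -> GKStateMachine {
    let machine = GKStateMachine(states: [
        ChaseState(enemy: enemy, player: player),
        AttackState(enemy: enemy, player: player),
        AvoidState(enemy: enemy, player: player)
    ])
    machine.enter(ChaseState.self)
    return machine
}
```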