actions-on-google, google-home, api-ai

Is API.AI the native way to build conversational skills for Google Assistant?


I have developed a conversational skill using API.AI and deployed it to Google Home, but API.AI's support seems limited and I am unable to do certain things, like playing an audio file. The question I have is whether it's better to stick with API.AI or switch to Actions on Google for the long term.


Solution

  • Google has said that API.AI is the recommended way to build an agent for 'actions on google' for those who don't need or want to do their own NLU. They seem to expect that most developers will use API.AI because it does some of the work for you, NLU being the prime example; cf. Alexa, where the developer is expected to specify nearly all of the different utterance variations for an intent (it will do some minor interpretation for you). The first sketch below shows roughly what that division of labor looks like from your code's side.

    On the other hand, keep in mind that API.AI was created and designed before 'actions on google' existed and before it was purchased by Google - it was designed to be a generic bot-creation service. So, while you gain something by creating a single bot that can fulfill many different services, and by having it do some of the messy work for you, you will certainly lose something compared to the power and control you have when writing to the API of one specific service - something more than just the NLU, IMO, though I can't speak to playing an audio file specifically.

    So, if you plan to target just the one service (and an audio bot is not relevant to most of the other services supported by API.AI) and you are finding the API.AI interface to be limiting, then you should certainly consider writing your service with the 'actions on google' SDK; the second sketch below gives a feel for what that looks like.
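    To make the NLU trade-off concrete: with API.AI, your fulfillment webhook only runs after API.AI has already matched an intent and extracted its parameters. A minimal sketch in TypeScript, assuming the v1-era webhook JSON format and an Express server (the endpoint path and reply text here are placeholders, not anything from the question):

        import * as express from 'express';
        import * as bodyParser from 'body-parser';

        const app = express();
        app.use(bodyParser.json());

        // API.AI calls this endpoint only after its NLU has matched an intent
        // and pulled out any parameters, so that work is done before we run.
        app.post('/webhook', (req, res) => {
          const intent = req.body.result.metadata.intentName; // matched by API.AI
          const params = req.body.result.parameters;          // entities it extracted

          // v1-era webhook reply: 'speech' is spoken, 'displayText' is shown.
          res.json({
            speech: `Handling ${intent}.`,
            displayText: `Handling ${intent} with ${Object.keys(params).length} parameter(s).`,
          });
        });

        app.listen(8080);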
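    And for the 'actions on google' route: a minimal sketch, assuming the v1-era Node.js client library ('actions-on-google' on npm) behind an Express endpoint; the intents are the SDK's standard ones, and the audio clip URL is a placeholder:

        import { ActionsSdkApp } from 'actions-on-google';
        import * as express from 'express';
        import * as bodyParser from 'body-parser';

        const server = express();
        server.use(bodyParser.json());

        server.post('/', (request, response) => {
          const app = new ActionsSdkApp({ request, response });
          const actionMap = new Map();

          // MAIN fires when the user first invokes the action.
          actionMap.set(app.StandardIntents.MAIN, (app: ActionsSdkApp) => {
            app.ask('Welcome! Ask me to play a sound.');
          });

          // TEXT fires on each later utterance: you get the raw text and do
          // your own NLU - exactly the work API.AI would otherwise do for you.
          actionMap.set(app.StandardIntents.TEXT, (app: ActionsSdkApp) => {
            const raw = app.getRawInput();
            if (raw.indexOf('play') !== -1) {
              // An SSML response can embed an audio clip - the kind of thing
              // the question found hard to do through API.AI alone.
              app.tell('<speak>Here you go <audio src="https://example.com/clip.ogg">a sound</audio></speak>');
            } else {
              app.tell('You said: ' + raw);
            }
          });

          app.handleRequest(actionMap);
        });

        server.listen(8080);

    The crude string matching in the TEXT handler is the point: it stands in for all the interpretation API.AI would otherwise be doing, which is both the control and the work you take on by going to the SDK directly.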