python, python-3.x, discord, discord.py, openai-api

Discord.py v2.0: Slash Command with Cogs integrated with OpenAI API


I have been making a Discord bot using discord.py v2.0 and the OpenAI API to build an AI bot for my personal server. First I will share my code.

This is the main.py file:

import os
import asyncio

import discord
from discord.ext import commands

intents = discord.Intents.all()
ceriumAI = commands.Bot(
    command_prefix="<>",
    intents=intents
)

@ceriumAI.event
async def on_ready():
    print("CeriumAI is ready and online!")

@ceriumAI.command()
async def sync(ctx):
    synced = await ceriumAI.tree.sync()
    print(f"Synced {len(synced)} command(s).")

async def loadCogs():
    for filename in os.listdir("./Cogs"):
        if filename.endswith(".py"):
            await ceriumAI.load_extension(f"Cogs.{filename[:-3]}")
            print(f"Loaded the cog: {filename[:-3]}")

async def main():
    await loadCogs()
    await ceriumAI.start(os.getenv("TOKEN"))

asyncio.run(main())

This is the other file/Cog, askai.py:

import openai
import discord
from discord import app_commands
from discord.ext import commands

class Ask_AI(commands.Cog):
    def __init__(self, ceriumAI):
        self.ceriumAI = ceriumAI
        openai.api_key = "MY-API-KEY"
        self.messages = [{"role": "system", "content": "You are an intelligent assistant."}]
    
    @app_commands.command(name="askai")
    async def ask_ai(self, interaction: discord.Interaction, query: str):
        while True:
            message = query
            if message:
                self.messages.append(
                    {"role": "user", "content": message}
                )
                chat = openai.ChatCompletion.create(
                    model="gpt-3.5-turbo", messages=self.messages
                )
            reply = chat.choices[0].message.content
            await interaction.response.send_message(f"{reply}")
            self.messages.append({"role": "assistant", "content": reply})
                

async def setup(bot):
    await bot.add_cog(Ask_AI(bot))

So the problem: basically, the slash command works. The weird part is that when I use the command with short queries like "hi", "hey", etc., it responds to the user quickly. But with a query like "What is your name?", or any query that ChatGPT takes longer to answer, the bot shows "The application did not respond". My guess is that a slash command has a response time limit, and when it expires Discord shows the "application did not respond" message. I would like to know whether such a limit exists and, if so, how to extend it so the AI can take its time and send the longer reply. If not, then a workaround for this problem. Thanks :)


Solution

  • You are correct.

    There is a time limit of 3 seconds for the interaction response.

    In order to keep the interaction from timing out and returning the "The application did not respond" message, you can use the defer() method included in the response class.

    When deferring, you get up to 15 minutes to respond instead of the normal 3 seconds.

    Also, please note that because deferring counts as a response, you can't use the usual response.send_message method afterwards. Instead, use the followup webhook that lives on the interaction object itself.

    await interaction.response.defer()
    [...]
    await interaction.followup.send(...)
    

    Docs on deferring

    Docs on followup
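
    Putting the two pieces together, here is a minimal sketch of how the cog from the question could look with defer() and followup. The while True loop is dropped, since a slash command handles one query per invocation, and the "MY-API-KEY" placeholder is kept from the question:

    import openai
    import discord
    from discord import app_commands
    from discord.ext import commands

    class Ask_AI(commands.Cog):
        def __init__(self, ceriumAI):
            self.ceriumAI = ceriumAI
            openai.api_key = "MY-API-KEY"
            self.messages = [{"role": "system", "content": "You are an intelligent assistant."}]

        @app_commands.command(name="askai")
        async def ask_ai(self, interaction: discord.Interaction, query: str):
            # Acknowledge right away; deferring buys up to 15 minutes for the real reply.
            await interaction.response.defer()

            self.messages.append({"role": "user", "content": query})
            chat = openai.ChatCompletion.create(
                model="gpt-3.5-turbo", messages=self.messages
            )
            reply = chat.choices[0].message.content
            self.messages.append({"role": "assistant", "content": reply})

            # Deferring already counted as the response, so send the reply via the follow-up webhook.
            await interaction.followup.send(reply)

    async def setup(bot):
        await bot.add_cog(Ask_AI(bot))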