python, openai-api, chatgpt-api

OpenAI ChatGPT API error: "InvalidRequestError: Unrecognized request argument supplied: messages"


I am currently trying to use OpenAI's most recent model: gpt-3.5-turbo. I am following a very basic tutorial.

I am working from a Google Colab notebook. I have to make a request for each prompt in a list of prompts, which for the sake of simplicity looks like this:

prompts = ['What are your functionalities?', 'what is the best name for an ice-cream shop?', 'who won the premier league last year?']

I defined a function to do so:

import openai

# Load your API key from an environment variable or secret management service
openai.api_key = 'my_API'

def get_response(prompts: list, model="gpt-3.5-turbo"):
    responses = []

    restart_sequence = "\n"

    for prompt in prompts:

        response = openai.Completion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=20,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0
        )

        responses.append(response['choices'][0]['message']['content'])

    return responses

However, when I call responses = get_response(prompts=prompts[0:3]), I get the following error:

InvalidRequestError: Unrecognized request argument supplied: messages

Any suggestions?

Replacing the messages argument with prompt leads to the following error:

InvalidRequestError: [{'role': 'user', 'content': 'What are your functionalities?'}] is valid under each of {'type': 'array', 'minItems': 1, 'items': {'oneOf': [{'type': 'integer'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}]}, 'example': '[1, 1313, 451, {"buffer": "abcdefgh", "shape": [1024], "dtype": "float16"}]'}, {'type': 'array', 'minItems': 1, 'maxItems': 2048, 'items': {'oneOf': [{'type': 'string'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}], 'default': '', 'example': 'This is a test.', 'nullable': False}} - 'prompt'

Solution

  • Problem

    You used the wrong method name to get a completion. Whichever OpenAI SDK you use (Python or Node.js), you need to call the right method.

    Which method name is the right one? It depends on the OpenAI model you want to use.

  • Solution

    The tables below will help you figure out which method name is the right one for a given OpenAI model.

    First, find in the table below which API endpoint is compatible with the OpenAI model you want to use.

    | API endpoint | Model group | Model name |
    | --- | --- | --- |
    | /v1/chat/completions | GPT-4, GPT-3.5 | gpt-4 and dated model releases, gpt-4-32k and dated model releases, gpt-4-1106-preview, gpt-4-vision-preview, gpt-3.5-turbo and dated model releases, gpt-3.5-turbo-16k and dated model releases, fine-tuned versions of gpt-3.5-turbo |
    | /v1/completions (Legacy) | GPT-3.5, GPT base | gpt-3.5-turbo-instruct, babbage-002, davinci-002 |
    | /v1/assistants | | All models except gpt-3.5-turbo-0301 supported. The retrieval tool requires gpt-4-1106-preview or gpt-3.5-turbo-1106. |
    | /v1/audio/transcriptions | Whisper | whisper-1 |
    | /v1/audio/translations | Whisper | whisper-1 |
    | /v1/audio/speech | TTS | tts-1, tts-1-hd |
    | /v1/fine_tuning/jobs | GPT-3.5, GPT base | gpt-3.5-turbo, babbage-002, davinci-002 |
    | /v1/embeddings | Embeddings | text-embedding-ada-002 |
    | /v1/moderations | Moderations | text-moderation-stable, text-moderation-latest |

    Second, find in the table below which method name you need to use.

    Note: Make sure you use the method name that matches your installed OpenAI SDK version.

    | API endpoint | Python SDK <v1 method name | Python SDK v1 method name | Node.js SDK v3 method name | Node.js SDK v4 method name |
    | --- | --- | --- | --- | --- |
    | /v1/chat/completions | openai.ChatCompletion.create | openai.chat.completions.create | openai.createChatCompletion | openai.chat.completions.create |
    | /v1/completions (Legacy) | openai.Completion.create | openai.completions.create | openai.createCompletion | openai.completions.create |
    | /v1/assistants | / | openai.beta.assistants.create | / | openai.beta.assistants.create |
    | /v1/audio/transcriptions | openai.Audio.transcribe | openai.audio.transcriptions.create | openai.createTranscription | openai.audio.transcriptions.create |
    | /v1/audio/translations | openai.Audio.translate | openai.audio.translations.create | openai.createTranslation | openai.audio.translations.create |
    | /v1/audio/speech | / | openai.audio.speech.create | / | openai.audio.speech.create |
    | /v1/fine_tuning/jobs | / | openai.fine_tuning.jobs.create | / | openai.fineTuning.jobs.create |
    | /v1/embeddings | openai.Embedding.create | openai.embeddings.create | openai.createEmbedding | openai.embeddings.create |
    | /v1/moderations | openai.Moderation.create | openai.moderations.create | openai.createModeration | openai.moderations.create |
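
    If you are not sure which OpenAI Python SDK version you have installed, you can check it from Python itself (a minimal sketch; running pip show openai in a terminal gives the same information):

    import openai

    # Prints the installed OpenAI Python SDK version, e.g. 0.28.1 (pre-v1) or 1.x.y (v1)
    print(openai.__version__)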

    Python SDK v1 working example for the gpt-3.5-turbo model (i.e., Chat Completions API)

    If you run test.py, the OpenAI API will return the following completion:

    Hello there! How can I assist you today?

    test.py

    import os
    from openai import OpenAI

    # The v1 client reads OPENAI_API_KEY from the environment automatically;
    # passing it explicitly here has the same effect
    client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

    completion = client.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=[
            {'role': 'user', 'content': 'Hello!'}
        ],
        temperature=0
    )

    print(completion.choices[0].message.content)
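
    Applied to the question's get_response function, a Python SDK v1 version could look like the sketch below (keeping the question's model and request parameters, and reading the API key from the OPENAI_API_KEY environment variable):

    import os
    from openai import OpenAI

    client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

    def get_response(prompts: list, model='gpt-3.5-turbo'):
        responses = []

        for prompt in prompts:
            # Chat Completions API, called with the Python SDK v1 method name
            completion = client.chat.completions.create(
                model=model,
                messages=[{'role': 'user', 'content': prompt}],
                temperature=0,
                max_tokens=20
            )

            # SDK v1 returns objects, so use attribute access rather than dictionary keys
            responses.append(completion.choices[0].message.content)

        return responses

    responses = get_response(prompts=['What are your functionalities?'])
    print(responses)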
    

    Node.js SDK v4 working example for the gpt-3.5-turbo model (i.e., Chat Completions API)

    If you run test.js, the OpenAI API will return the following completion:

    Hello there! How can I assist you today?

    test.js

    const OpenAI = require("openai");
    const client = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
        
    async function main() {
      const completion = await client.chat.completions.create({
        model: 'gpt-3.5-turbo',
        messages: [
          { role: 'user', content: 'Hello!' }
        ],
        temperature: 0,
      });
    
      console.log(completion.choices[0].message.content);
    }
    
    main();
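
    Pre-v1 Python SDK minimal fix for the question's code (i.e., Chat Completions API)

    The question's code appears to use the pre-v1 Python SDK, since it calls openai.Completion.create. If you stay on that SDK version, the minimal fix (a sketch based on the method-name table above) is to call openai.ChatCompletion.create instead, because gpt-3.5-turbo is served by the /v1/chat/completions endpoint:

    import openai

    openai.api_key = 'my_API'  # as in the question; ideally load it from an environment variable

    # gpt-3.5-turbo needs the Chat Completions method; openai.Completion.create
    # targets /v1/completions, which only accepts a prompt argument
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[{'role': 'user', 'content': 'What are your functionalities?'}],
        temperature=0,
        max_tokens=20
    )

    print(response['choices'][0]['message']['content'])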