I have built a very simple application using Azure OpenAI, LangChain, and Streamlit. Here is my code:
from dotenv import load_dotenv,find_dotenv
load_dotenv(find_dotenv())
import streamlit as st
from langchain.llms import AzureOpenAI
from langchain.prompts import PromptTemplate
LLM = AzureOpenAI(max_tokens=1500, deployment_name="gpt-35-turbo-16k", model="gpt-35-turbo-16k")
prompt_template = """
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Question: {question}
"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["question"])
st.title('Experiment AzureOpenAI :)')
user_question = st.text_input('Your query here please')
if user_question:
    # use the PromptTemplate defined above to build the final prompt
    prompt = PROMPT.format(question=user_question)
    response = LLM(prompt)
    st.write(response)
st.write('done')
When I run the above code, I am getting the following error back:
InvalidRequestError: The completion operation does not work with the specified model, gpt-35-turbo-16k. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.
However, my code runs perfectly fine if I change the model from gpt-35-turbo-16k to gpt-35-turbo. So the following code works:
LLM = AzureOpenAI(max_tokens=1500, deployment_name="gpt-35-turbo", model="gpt-35-turbo")
I am wondering why this error is occurring.
From this link, the only difference I could see is that gpt-35-turbo-16k supports up to 16k input tokens whereas gpt-35-turbo supports up to 4k input tokens.
Based on the documentation, the plausible reason is that this model's version (0613) only supports the Chat Completions API, while LangChain's AzureOpenAI class calls the legacy Completions API:
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-35-models
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can also be used with the Completions API. GPT-3.5 Turbo (0613) only supports the Chat Completions API.
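To illustrate the distinction: the Completions API takes a single prompt string, while the Chat Completions API takes a list of role-tagged messages, and gpt-35-turbo-16k (0613) only accepts the latter. In LangChain this means using the chat model class (e.g. AzureChatOpenAI from langchain.chat_models) instead of AzureOpenAI. A rough sketch of the two request shapes (the prompt text here is just an illustrative placeholder):

```python
# Legacy Completions request body: a single prompt string.
# Only older snapshots such as gpt-35-turbo (0301) accept this shape.
completions_request = {
    "prompt": "Question: What is LangChain?",
    "max_tokens": 1500,
}

# Chat Completions request body: a list of role-tagged messages.
# gpt-35-turbo-16k (0613) only accepts this shape.
chat_request = {
    "messages": [
        {"role": "system", "content": "If you don't know the answer, just say that you don't know."},
        {"role": "user", "content": "What is LangChain?"},
    ],
    "max_tokens": 1500,
}
```

So the error is not about token limits at all; it is about which endpoint the model snapshot supports.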