I am following this tutorial from the LangChain official documentation here, where I try to track the number of tokens used. However, I wanted to use gpt-3.5-turbo instead of text-davinci-003, so I changed the LLM class from OpenAI to ChatOpenAI, but this raises a ValueError about an unsupported message type.
Here is the code snippet:
import os

from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback

os.environ['OPENAI_API_KEY'] = "OPENAI-API-KEY"

llm = ChatOpenAI(
    model_name='gpt-3.5-turbo-16k',
    temperature=0.0
)

with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb)
Getting this error:
ValueError: Got unsupported message type: T
Why does changing the class from OpenAI to ChatOpenAI give this error, and how do I solve it?
You are getting that error because you are only halfway through the migration. If you pass the prompt string directly to a ChatOpenAI instance - llm in your case - you'll get an error: ChatOpenAI expects a list of message objects, and when given a bare string it iterates over it character by character, which is why the first character of "Tell me a joke" shows up in the error as unsupported message type "T".
Try this, based on LangChain's example:
import os

from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback
from langchain.schema import AIMessage, HumanMessage, SystemMessage

os.environ['OPENAI_API_KEY'] = "OPENAI-API-KEY"

chat = ChatOpenAI(
    model_name='gpt-3.5-turbo',  # Opinion: no need to use the -16k model
    temperature=0.0
)

with get_openai_callback() as cb:
    result = chat([HumanMessage(content="Tell me a joke.")])
    print(f"\n Total Tokens: {cb.total_tokens}")
    print(f" Prompt Tokens: {cb.prompt_tokens}")
    print(f" Completion Tokens: {cb.completion_tokens}")
    print(f" Total Cost (USD): ${round(cb.total_cost, 2)}")
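For context, cb.total_cost is just the token counts multiplied by the model's per-1K-token prices. A minimal sketch of that arithmetic, assuming illustrative gpt-3.5-turbo prices of $0.0015 per 1K prompt tokens and $0.002 per 1K completion tokens (check OpenAI's pricing page for the current numbers):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price_per_1k: float = 0.0015,
                  completion_price_per_1k: float = 0.002) -> float:
    """Rough USD cost estimate; the per-1K prices are illustrative assumptions."""
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# A short joke is tiny: ~20 prompt tokens and ~50 completion tokens
print(estimate_cost(20, 50))  # 0.00013
```

Note that a single short completion costs a fraction of a cent, so rounding the cost to two decimal places (as above) will usually print $0.0; use more decimals if you want to see the actual number.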
Look at LangChain's Python documentation for the ChatOpenAI reference.
PS: With temperature=0.0, you'll likely get a quite dull joke.