langchain, azure-openai

Differences between openai.AzureOpenAI and langchain_openai.AzureOpenAI


This code snippet successfully instantiates a usable openai.AzureOpenAI client, and I want to adapt it to the langchain_openai.AzureOpenAI interface. Using it as-is returns a 404 error. What changes do I need to make?

import os

# from openai import AzureOpenAI
from langchain_openai import AzureOpenAI

llm = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2024-02-15-preview"
)
# openai use case works: 
# llm.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Tell me a joke"}])

# langchain use case fails:
llm.invoke("Tell me a joke")

llm.invoke raises openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
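For context on why this surfaces as a 404: Azure OpenAI routes requests by deployment name, which is embedded in the URL path, so a client that was never told a deployment ends up requesting a path that doesn't exist. A rough sketch of the URL shape a chat-completions request resolves to (illustrative only; the SDK builds this internally, and the endpoint/deployment names here are placeholders):

```python
def azure_openai_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Compose the chat-completions URL Azure OpenAI expects.

    The deployment name is a path segment, not a request-body field,
    which is why a missing deployment produces 'Resource not found'.
    """
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

print(azure_openai_url(
    "https://my-resource.openai.azure.com/",
    "gpt-35-turbo",
    "2024-02-15-preview",
))
# https://my-resource.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2024-02-15-preview
```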

My .env file with appropriate substitutions:

AZURE_OPENAI_ENDPOINT="https://<endpoint-name>.openai.azure.com/"
AZURE_OPENAI_API_KEY="<api-key>"
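Note that values in a `.env` file still have to be loaded into the process environment before `os.getenv` can see them; `load_dotenv()` from the `python-dotenv` package is the usual route. A minimal stdlib-only equivalent, purely for illustration:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: put KEY="value" lines into os.environ.

    Skips blanks and comments; existing environment variables win
    (same default behaviour as python-dotenv's load_dotenv).
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```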

Solution

  • Based on Gaurav Mantri's comment, the fix was to pass azure_deployment= to the LangChain constructor; it takes the place of the model="gpt-35-turbo" argument in the openai chat.completions.create call:

    
    llm = AzureOpenAI(
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_KEY"),
        api_version="2024-02-15-preview", 
        azure_deployment="gpt-35-turbo"
    )