Tags: python, openai-api, langchain

LangChain RAG - ChatOpenAI doesn't form complete sentences when replying


I am building a very simple RAG application using LangChain. The problem I'm having is that when I use ChatOpenAI and ask a question, the model doesn't form complete sentences in its answers; it doesn't behave like a "chatbot", unlike llama2 for example (see images below). When I switch between ChatOpenAI and llama2, I don't touch anything in my code except commenting out one model and uncommenting the other.
My data comes from openfoodfacts, which is why I ask about specific ingredients in the question.
What's the problem, and what can I do to get the same result with ChatOpenAI as with llama2?

ChatOpenAI:
(screenshot of the ChatOpenAI response)

Llama2:
(screenshot of the llama2 response)

Code:

from fastapi import FastAPI
from langchain.vectorstores import FAISS
from langserve import add_routes
from langchain_community.llms import Ollama
from langchain.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.embeddings import OpenAIEmbeddings
import os
import pandas as pd

os.environ["OPENAI_API_KEY"] = "SECRET"  # replace with your actual key

# Swap models by commenting/uncommenting one of the two lines below
# model = Ollama(model="llama2")
model = ChatOpenAI(temperature=0.1)

# Load the openfoodfacts-based product data
products = pd.read_csv('./data/products.csv')
# Embed the product texts with OpenAI embeddings and index them in FAISS
vectorstore = FAISS.from_texts(
    products['text'], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()


app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="Spin up a simple API server using LangChain's Runnable interfaces",
)

ANSWER_TEMPLATE = """Answer the question based on the following context:
{context}

Question: {question}
"""

prompt = ChatPromptTemplate.from_template(ANSWER_TEMPLATE)

# Retrieve context for the incoming question, fill the prompt,
# call the model, and parse the reply into a plain string
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# Adds routes to the app for invoking the chain under:
# /invoke
# /batch
# /stream
add_routes(app, chain)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
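
Once the server is running, LangServe exposes the chain over the routes listed above. A minimal sketch of a client call against the /invoke route (the palm-oil question is just an illustrative example for the openfoodfacts data):

import requests

# LangServe's /invoke route wraps the chain input in {"input": ...}
# and returns the chain's result under the "output" key
response = requests.post(
    "http://localhost:8000/invoke",
    json={"input": "Which products contain palm oil?"},
)
print(response.json()["output"])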

Solution

  • Changing the temperature to 0.7 and using the default RAG prompt rlm/rag-prompt from the LangChain Hub resolved the issue.
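
A minimal sketch of those two changes, assuming the langchainhub package is installed so the prompt can be pulled from the LangChain Hub; the retriever and the rest of the server code stay as in the question:

from langchain import hub
from langchain.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Pull the default RAG prompt (it also expects "context" and "question",
# so the chain's input mapping does not change)
prompt = hub.pull("rlm/rag-prompt")

# Raise the temperature from 0.1 to 0.7 for more conversational answers
model = ChatOpenAI(temperature=0.7)

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)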