
How to pass context along with chat_history and question in template in langchain?


This is how I am trying to get answers using ConversationalRetrievalChain along with RAG, but I am facing ValueError: Missing some input keys: {'context'}

        llm = HuggingFacePipeline(pipeline=generate_text)
        self.vectorstore = Pinecone(self.index, self.embed_model.embed_query, "text")
        self.vectorstore.similarity_search(topic, k=6)

        PROMPT = None
        if self.template is not None:
            PROMPT = PromptTemplate(
                template=self.template, input_variables=["chat_history", "context", "question"]
            )

        chat_history = ConversationBufferMemory(
            output_key='answer', context_key='context',
            memory_key='chat_history', return_messages=True
        )

        self.rag_pipeline = ConversationalRetrievalChain.from_llm(
            llm=llm,
            chain_type="stuff",
            retriever=self.vectorstore.as_retriever(),
            condense_question_prompt=PROMPT,
            verbose=False,
            return_source_documents=True,
            memory=chat_history,
            get_chat_history=lambda h : h,
        )

This is my template:

        You help everyone by answering questions, and improve your answers from previous answers in History.
        Don't try to make up an answer, if you don't know just say that you don't know.
        Answer in the same language the question was asked.
        Answer in a way that is easy to understand.
        Do not say "Based on the information you provided, ..." or "I think the answer is...". Just answer the question directly in detail.
        Use only the following pieces of context to answer the question at the end.

        History: {chat_history}

        Context: {context}

        Question: {question}
        Answer:

It works only the first time I use rag_pipeline; for the second prompt it gives the error.


Solution

  • To pass context along with chat_history and question to your template, you can define the template as follows:

    template = """
    You help everyone by answering questions, and improve your answers from previous answers in History.
    Don't try to make up an answer, if you don't know, just say that you don't know.
    Answer in the same language the question was asked.
    Answer in a way that is easy to understand.
    Do not say "Based on the information you provided, ..." or "I think the answer is...". Just answer the question directly in detail.
    
    History: {chat_history}
    
    Context: {context}
    
    Question: {question}
    Answer: 
    """
    

    In this modified template, there are placeholders for chat_history, context, and question within the template string. For the chain to be able to fill them in at run time, declare them as input variables when constructing the PromptTemplate:

    PROMPT = PromptTemplate(
        template=template,
        input_variables=["chat_history", "context", "question"]
    )
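
    To sanity-check that the template renders as expected before wiring it into the chain, you can format it directly with sample values (the strings below are only illustrative placeholders):

    print(PROMPT.format(
        chat_history="",
        context="The Eiffel Tower was completed in 1889.",
        question="When was the Eiffel Tower built?",
    ))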
    

    When the template is used inside ConversationalRetrievalChain, you do not fill these placeholders yourself: the memory supplies chat_history, the retriever supplies context (the retrieved documents), and your call supplies question. Also note that condense_question_prompt is used only by the step that rewrites a follow-up question into a standalone one, and that step receives just chat_history and question; it runs only once there is history, which is why a prompt that also expects context fails on the second call. A prompt with a context placeholder belongs to the document-combining ("stuff") step instead.
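
    A minimal sketch of attaching the prompt to that step (assuming the legacy LangChain API, where combine_docs_chain_kwargs forwards keyword arguments to the underlying "stuff" chain):

    self.rag_pipeline = ConversationalRetrievalChain.from_llm(
        llm=llm,
        chain_type="stuff",
        retriever=self.vectorstore.as_retriever(),
        # The custom prompt goes to the chain that sees the retrieved
        # documents, so its {context} placeholder can be filled in.
        combine_docs_chain_kwargs={"prompt": PROMPT},
        return_source_documents=True,
        memory=chat_history,
        get_chat_history=lambda h: h,
    )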

    example:

    response = self.rag_pipeline(
        {"question": "Can you tell me about the history of the Eiffel Tower?"}
    )


    In this example, "Can you tell me about the history of the Eiffel Tower?" is the input question. The retriever fills context with the retrieved documents and the memory fills chat_history with the previous turns, so the template's placeholders are populated automatically when the response is generated.
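
    To confirm the second turn no longer raises the error, here is a short sketch of two consecutive calls (hypothetical questions; "answer" matches the output_key configured on the memory):

    # First turn: chat_history is empty; context comes from the retriever.
    first = self.rag_pipeline({"question": "Who designed the Eiffel Tower?"})
    print(first["answer"])

    # Second turn: chat_history now holds the first exchange; this is the
    # call that previously raised "Missing some input keys: {'context'}".
    second = self.rag_pipeline({"question": "When was it completed?"})
    print(second["answer"])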