Tags: agent, openai-api, langchain, large-language-model

In LangChain, how to save the verbose output to a variable?


I tried executing a LangChain agent. I want to save the verbose output to a variable, but all I can access from agent.run is the final answer.

How can I save the verbose output to a variable so that I can use it later?

My code:

import json

from langchain.agents import AgentType, Tool, initialize_agent, load_tools
from langchain.llms import OpenAI
from langchain.utilities import PythonREPL

llm = OpenAI(temperature=0.1)

## Define Tools
python_repl = PythonREPL()

tools = load_tools(["python_repl", "llm-math"], llm=llm)

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

response = agent.run("What is 3^2. Use calculator to solve.")

I tried accessing the response from the agent, but it contains only the final answer instead of the verbose output.

Printing response gives only 9, but I would like the verbose trace, like:

> Entering new AgentExecutor chain...
 I need to use the calculator to solve this.
Action: Calculator
Action Input: 3^2
Observation: Answer: 9
Thought: I now know the final answer.
Final Answer: 9

Solution

  • I couldn't find an API for saving the verbose output to a variable.

    However, an alternative solution is to access the agent's intermediate steps.

    That is, set return_intermediate_steps=True:

    agent = initialize_agent(
      tools, 
      llm,
      agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, 
      verbose=True,
      return_intermediate_steps=True
    )
    

    and call the agent with response = agent({"input": "What is 3^2. Use calculator to solve."}) instead of agent.run(...).

    Finally, you can access the intermediate steps in response["intermediate_steps"]; a minimal sketch of rebuilding a verbose-style trace from them is shown below.

    Hope this helps.
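    For illustration, here is a sketch of reconstructing something close to the verbose trace from the intermediate steps, assuming the agent defined above with return_intermediate_steps=True. Each intermediate step is an (AgentAction, observation) pair, and the AgentAction's log attribute holds the raw "Thought / Action / Action Input" text produced by the LLM, so the result may differ slightly in formatting from what verbose=True prints to the console.

    response = agent({"input": "What is 3^2. Use calculator to solve."})

    # Rebuild a verbose-style trace from the (AgentAction, observation) pairs.
    verbose_lines = []
    for action, observation in response["intermediate_steps"]:
        verbose_lines.append(action.log)                       # Thought / Action / Action Input text
        verbose_lines.append(f"Observation: {observation}")
    verbose_lines.append(f"Final Answer: {response['output']}")

    verbose_output = "\n".join(verbose_lines)
    print(verbose_output)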