
Autogen with LM Studio running llama3


Very new to Autogen. I have the model meta-llama-3.1-8b-instruct running on LM Studio at http://127.0.0.1:1234/v1. I am trying to run the example code provided at Autogen Getting Started. The only difference is that I have changed the config_list to work with LM Studio:

import autogen
import os
import dotenv
from autogen import AssistantAgent, UserProxyAgent

config_list = [{
    "api_type": "open_ai",
    "api_base": "http://127.0.0.1:1234/v1",
    "api_key": "NULL"
}]

llm_config = {
    "request_timeout": 600,
    "seed": 42,
    "config_list": config_list,
    "temperature": 0
}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about an egg and chicken.",
)

However, I get the following error. Am I missing something obvious?

TypeError: Missing required arguments; Expected either ('messages' and 'model') or ('messages', 'model' and 'stream') arguments to be given


Solution

  • I was just working on this as well and managed to get it running with llama-3.2-1b running locally on LM Studio.

    • In your config_list, change 'api_base' to 'base_url'
    • In your llm_config, add the model name: 'model': 'llama-3.2-1b-instruct'
    from autogen import AssistantAgent, UserProxyAgent
    
    config_list = [{
        "api_type": "open_ai",
        "base_url": "http://localhost:1234/v1",
        "api_key": "NULL"
    }]
    
    llm_config = {
        "seed": 42,
        "config_list": config_list,
        "temperature": 0,
        "model": "llama-3.2-1b-instruct"
    }
    
    assistant = AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)
    
    # Start the chat
    user_proxy.initiate_chat(
        assistant,
        message="Tell me a joke about an egg and chicken.",
    )
    

    Example output:

    user_proxy (to assistant):
    
    Tell me a joke about an egg and chicken.
    
    --------------------------------------------------------------------------------
    assistant (to user_proxy):
    
    Here's one:
    
    Why did the egg go to therapy after meeting the chicken?
    
    Because it was cracking under the pressure!
    
    I hope that cracked you up!
    
    --------------------------------------------------------------------------------
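As a quick sanity check before starting the chat, you can verify the config carries the keys whose absence triggered the original TypeError. The validate_llm_config helper below is my own sketch, not part of Autogen:

```python
# Hypothetical helper (not part of Autogen): fail early if llm_config
# lacks the keys the OpenAI-compatible client needs.
config_list = [{
    "api_type": "open_ai",
    "base_url": "http://localhost:1234/v1",  # LM Studio's local endpoint
    "api_key": "NULL",                       # LM Studio ignores the key
}]

llm_config = {
    "seed": 42,
    "config_list": config_list,
    "temperature": 0,
    "model": "llama-3.2-1b-instruct",  # must match the model loaded in LM Studio
}

def validate_llm_config(cfg):
    # The question's TypeError came from the client being invoked
    # without a "model", so check for it up front.
    if "model" not in cfg:
        raise ValueError("llm_config is missing 'model'")
    for entry in cfg.get("config_list", []):
        if "base_url" not in entry:
            raise ValueError("config entries need 'base_url' (not 'api_base')")
    return True

print(validate_llm_config(llm_config))  # prints True
```

Note that the model name is whatever identifier LM Studio shows for the loaded model; it has to match exactly, or the server will reject the request.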