google-cloud-vertex-ai, langchain, google-ai-platform, py-langchain

How to supply a pre-trained custom model to LangChain?


I am trying to initialise a LangChain chain with a pre-trained custom model of my own rather than one of Google's base models. When I run the code with text-bison, it works as expected, but when I run it with my own model, I get the following exception:

_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "Request contains an invalid argument."
    debug_error_string = "UNKNOWN:Error received from peer ipv4:x.x.x.x:443 {grpc_message:"Request contains an invalid argument.", grpc_status:3, created_time:"2023-09-07T06:14:11.519783678+00:00"}"

My custom model was created by supervised tuning on top of text-bison. Here is my code:

from langchain.llms import VertexAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# llm = VertexAI(model_name="text-bison@001", max_output_tokens=1024)  # this works
llm = VertexAI(model_name="projects/xxx/locations/us-central1/models/xxx", max_output_tokens=1024)  # this fails
conversation_buf = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory()
)

Solution

  • VertexAI takes another parameter, tuned_model_name, which is where the tuned model's full resource name goes; keep model_name pointing at the base model (a fuller sketch follows after the snippet):

    llm = VertexAI(model_name="text-bison@001", 
                   tuned_model_name="projects/903327504912/locations/us-central1/models/9053578854223839232",
                   max_output_tokens=1024)