When trying to access the ollama container from another (node) service in my docker compose setup, I get the following error:
ResponseError: model 'llama3' not found, try pulling it first
I want the container setup to be fully automatic; I don't want to connect to the containers and pull the models by hand. Is there a way to load the model of my choice automatically when the ollama Docker container is created?
Here is the relevant part of my docker-compose.yml:
ollama:
  image: ollama/ollama:latest
  ports:
    - 11434:11434
  volumes:
    - ./ollama/ollama:/root/.ollama
  container_name: ollama
  pull_policy: always
  tty: true
  restart: always
Use a custom entrypoint script to download the model when the container is launched. The model is persisted in the volume mount, so subsequent starts are quick.
version: '3.7'
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - 11434:11434
    volumes:
      - ./ollama/ollama:/root/.ollama
      - ./entrypoint.sh:/entrypoint.sh
    container_name: ollama
    pull_policy: always
    tty: true
    restart: always
    entrypoint: ["/usr/bin/bash", "/entrypoint.sh"]
Key changes:
- an entrypoint override, so the container runs the custom script instead of the default ollama serve
- a volume mount for ./entrypoint.sh, so the script is available inside the container

This is the content of entrypoint.sh:
#!/bin/bash
# Start Ollama in the background.
/bin/ollama serve &
# Record Process ID.
pid=$!
# Wait until the server is ready instead of sleeping a fixed interval.
until ollama list >/dev/null 2>&1; do
  sleep 1
done
echo "🔴 Retrieve LLAMA3 model..."
ollama pull llama3
echo "🟢 Done!"
# Wait for Ollama process to finish.
wait $pid
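On the first start the pull takes a while, so your node service can still hit the API before llama3 is available. One way to handle that is a compose healthcheck that only passes once the model is present, combined with a depends_on condition on the consuming service. This is a minimal sketch, not tested against your setup: it assumes your consuming service is named node (adjust to your actual service name) and that grep is available in the image, which it is in the Ubuntu-based ollama/ollama image.

services:
  ollama:
    # ...same as above, plus:
    healthcheck:
      # Passes only once the llama3 model shows up in the local model list.
      test: ["CMD-SHELL", "ollama list | grep -q llama3"]
      interval: 10s
      retries: 30

  node:  # hypothetical name for your consuming service
    depends_on:
      ollama:
        condition: service_healthy

You can also verify from the host that the model was pulled by querying the Ollama API, which lists locally available models:

docker compose up -d ollama
curl http://localhost:11434/api/tags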