My docker-compose.yaml is as follows:
```yaml
version: '3.4'
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.25.0
    restart: on-failure:0
    ports:
      - 8080:8080
      - 50051:50051
    volumes:
      - /home/nitin/repo/central/Gen_AI/weaviate/volume:/var/lib/weaviate
    environment:
      QUERY_DEFAULTS_LIMIT: 20
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
      DEFAULT_VECTORIZER_MODULE: 'none'
      ENABLE_MODULES: text2vec-transformers
      TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
      CLUSTER_HOSTNAME: 'tenant_1'
```
When I try to spin up a local Weaviate server with

```sh
sudo docker compose -f ./docker-compose.yaml up
```

I get:
{"action":"transformer_remote_wait_for_startup","error":"send check ready request: Get \"http://t2v-transformers:8080/.well-known/ready\": dial tcp: lookup t2v-transformers on 127.0.0.11:53: server misbehaving","level":"warning","msg":"transformer remote inference service not ready","time":"2024-05-20T07:09:47Z"}
I do not know my way around YAML much. Please help me get the Weaviate service running locally.
That's because of `ENABLE_MODULES: text2vec-transformers`.

If you enable the transformers module (docs), Weaviate expects that it can reach that service (in this specific example: `TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080`). In your docker-compose.yaml, no `t2v-transformers` service is defined, so Docker's embedded DNS resolver (the `127.0.0.11` in your log) can't resolve that hostname, hence Weaviate can't find the service.
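You can confirm this yourself. Assuming the file is the docker-compose.yaml you posted, Compose will list only a single service:

```sh
# List the services defined in the compose file;
# t2v-transformers does not appear, so its hostname can't resolve
docker compose -f ./docker-compose.yaml config --services
# prints: weaviate
```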
To solve your issue, you can do one of two things.

You can simply run Weaviate without any modules, as can be found in the docs here. You want this if you bring your own vector embeddings.
```yaml
---
version: '3.4'
services:
  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: cr.weaviate.io/semitechnologies/weaviate:1.25.1
    ports:
      - 8080:8080
      - 50051:50051
    volumes:
      - weaviate_data:/var/lib/weaviate
    restart: on-failure:0
    environment:
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'none'
      ENABLE_MODULES: ''
      CLUSTER_HOSTNAME: 'node1'
volumes:
  weaviate_data:
...
```
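Once the container is up, you can check that Weaviate is reachable. A quick sanity check, assuming the port mapping above (Weaviate exposes the same kind of `/.well-known/ready` endpoint that it was probing the transformers container for in your log):

```sh
# Start the stack in the background
sudo docker compose -f ./docker-compose.yaml up -d

# Returns HTTP 200 once Weaviate is ready
curl -i http://localhost:8080/v1/.well-known/ready

# Shows the server version and the (empty) module list
curl http://localhost:8080/v1/meta
```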
Alternatively, you should use this one if you want to vectorize your data locally using the transformers container, as can be found in the docs here.
```yaml
version: '3.4'
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.25.1
    restart: on-failure:0
    ports:
      - 8080:8080
      - 50051:50051
    environment:
      QUERY_DEFAULTS_LIMIT: 20
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: "./data"
      DEFAULT_VECTORIZER_MODULE: text2vec-transformers
      ENABLE_MODULES: text2vec-transformers
      TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
      CLUSTER_HOSTNAME: 'node1'
  t2v-transformers:
    image: cr.weaviate.io/semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
    environment:
      ENABLE_CUDA: 0 # set to 1 to enable
      # NVIDIA_VISIBLE_DEVICES: all # enable if running with CUDA
```
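With this file, both containers start on the same Compose network, so the `t2v-transformers` hostname resolves and the startup check from your error log can succeed. A sanity check, assuming the same file name and ports as above (the first run pulls the inference image, which can take a while):

```sh
sudo docker compose -f ./docker-compose.yaml up -d

# Ready once this returns HTTP 200
curl -i http://localhost:8080/v1/.well-known/ready

# text2vec-transformers should now be listed under "modules"
curl http://localhost:8080/v1/meta
```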