Tags: docker, elasticsearch, docker-compose, nest

Accessing Elasticsearch Docker instance using NEST


I run a simple Elasticsearch instance using Docker Compose:

---
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
    hostname: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    ports:
      - 9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:6.1.1
    environment:
      SERVER_NAME: "0.0.0.0"
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - 5601:5601

I can access it from the browser using localhost; however, when I run my application and connect to it, I run into problems. From what I was able to track down, the application successfully connects to the Elasticsearch instance, resolves the IP address the node reports it is bound to, and then uses that IP address for all further communication with the Elasticsearch instance.

From Fiddler:

  1. http://10.0.75.2:9200/_nodes/http,settings?flat_settings&timeout=2s
  2. It returns JSON that contains the following line: "host": "172.18.0.4"
  3. The client then tries to use this IP address, and my requests fail because that address cannot be reached from my machine

What should I change in order to be able to successfully connect to my Elasticsearch instance from my C# application?

NEST version: 5.5.0
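
For context, the request in step 1 above is the kind of sniff request NEST sends when a sniffing connection pool is used, so the behaviour can be reproduced with a minimal NEST 5.x setup along the lines of the sketch below (illustrative only; the index name is a placeholder):

using System;
using Elasticsearch.Net;
using Nest;

class Program
{
    static void Main()
    {
        // Seed the sniffing pool with the address Docker exposes on the host
        var pool = new SniffingConnectionPool(new[] { new Uri("http://localhost:9200") });

        var settings = new ConnectionSettings(pool)
            .DefaultIndex("my-index"); // placeholder index name

        var client = new ElasticClient(settings);

        // The first request triggers a sniff (_nodes/http,settings). The pool is then
        // re-seeded with the node's publish address (172.18.0.4 here), and subsequent
        // requests fail because that address is not reachable from the host.
        var ping = client.Ping();
        Console.WriteLine(ping.IsValid);
    }
}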


Solution

  • (Note: this answer uses NEST 7.1.0 and Elasticsearch 7.2.0, but the underlying concept is the same).

    SniffingConnectionPool seeds (and re-seeds) the connection pool with the http.publish_address of each node it discovers, which means that address must be reachable by the client. If the publish address is not explicitly set, it falls back to http.host; if that is not set either, it falls back to network.host, which inside a container is the address on the private Docker network.
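
    As an aside, the publish address only matters because the pool sniffs. If the client only ever needs to talk to a single node, a non-sniffing pool such as SingleNodeConnectionPool sidesteps the problem entirely, because the client keeps using the address it was given. A minimal sketch, separate from the sniffing setup below (passing a Uri straight to ConnectionSettings uses a SingleNodeConnectionPool under the covers):

    using System;
    using Elasticsearch.Net;
    using Nest;

    class SingleNodeExample
    {
        static void Main()
        {
            // A pool that never sniffs: the client keeps using the seeded address,
            // so the node's publish address is irrelevant.
            var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
            var settings = new ConnectionSettings(pool).DefaultIndex("posts");
            var client = new ElasticClient(settings);

            Console.WriteLine(client.Ping().IsValid);
        }
    }

    To make sniffing itself work, each node's http publish address has to be reachable from the host.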

    With a Docker Compose configuration like the following

    version: '2.2'
    services:
      es01:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        container_name: es01
        environment:
          - node.name=es01
          - discovery.seed_hosts=es02
          - cluster.initial_master_nodes=es01,es02
          - cluster.name=docker-cluster
          - bootstrap.memory_lock=true
          - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
          - "http.port=9200"
          - "http.publish_host=_local_"
        ulimits:
          memlock:
            soft: -1
            hard: -1
        volumes:
          - esdata01:/usr/share/elasticsearch/data
        ports:
          - 9200:9200
        networks:
          - esnet
      es02:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        container_name: es02
        environment:
          - node.name=es02
          - discovery.seed_hosts=es01
          - cluster.initial_master_nodes=es01,es02
          - cluster.name=docker-cluster
          - bootstrap.memory_lock=true
          - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
          - "http.port=9201"
          - "http.publish_host=_local_"
        ulimits:
          memlock:
            soft: -1
            hard: -1
        volumes:
          - esdata02:/usr/share/elasticsearch/data
        ports:
          - 9201:9201
        networks:
          - esnet
    
    volumes:
      esdata01:
        driver: local
      esdata02:
        driver: local
    
    networks:
      esnet:
    

    The es01 node is mapped to localhost:9200 and es02 to localhost:9201. We could have run es02 on port 9200 inside its container and mapped that to host port 9201, but the problem with doing so is that es02's http.publish_address would still be 127.0.0.1:9200, which is what the SniffingConnectionPool would end up using when seeding the node. To avoid this, es02 runs on a different port to es01, so that the two http publish addresses are different.

    With the above configuration, http://localhost:9200/_nodes?filter_path=nodes.*.http returns

    {
      "nodes": {
        "CSWncVnxS1esOm1KQtOR3A": {
          "http": {
            "bound_address": ["0.0.0.0:9200"],
            "publish_address": "127.0.0.1:9200",
            "max_content_length_in_bytes": 104857600
          }
        },
        "rOAp0T57TgSI_zU1L-T-vw": {
          "http": {
            "bound_address": ["0.0.0.0:9201"],
            "publish_address": "127.0.0.1:9201",
            "max_content_length_in_bytes": 104857600
          }
        }
      }
    }
    

    (The node IDs will be different if you try this.) Now the SniffingConnectionPool will work:

    using System;
    using Elasticsearch.Net;
    using Nest;

    internal class Program
    {
        private static void Main()
        {
            var defaultIndex = "posts";

            // Seed the pool with both nodes' publish addresses
            var uris = new[]
            {
                new Uri("http://localhost:9200"),
                new Uri("http://localhost:9201")
            };

            var pool = new SniffingConnectionPool(uris);

            var settings = new ConnectionSettings(pool)
                .DefaultIndex(defaultIndex);

            var client = new ElasticClient(settings);

            // Sniffing happens on the first request; because the publish addresses
            // returned by the nodes are now reachable from the host, this succeeds
            var response = client.Nodes.Info();

            foreach (var node in response.Nodes)
            {
                Console.WriteLine($"{node.Key} http publish_address is: {node.Value.Http.PublishAddress}");
            }
        }
    }
    

    prints

    CSWncVnxS1esOm1KQtOR3A http publish_address is: 127.0.0.1:9200
    rOAp0T57TgSI_zU1L-T-vw http publish_address is: 127.0.0.1:9201
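
    As a quick end-to-end check, the same sniffing client can index and search documents in the posts default index. Post here is a hypothetical document type used purely for illustration:

    using System;
    using Elasticsearch.Net;
    using Nest;

    public class Post
    {
        // Hypothetical document type, used only for this example
        public int Id { get; set; }
        public string Title { get; set; }
    }

    public static class UsageExample
    {
        public static void Run()
        {
            var pool = new SniffingConnectionPool(new[]
            {
                new Uri("http://localhost:9200"),
                new Uri("http://localhost:9201")
            });

            var client = new ElasticClient(new ConnectionSettings(pool).DefaultIndex("posts"));

            // Index a document into the default index ("posts")
            var indexResponse = client.IndexDocument(new Post { Id = 1, Title = "Hello from Docker" });
            Console.WriteLine(indexResponse.IsValid);

            // Refresh so the document is immediately searchable (a test-only convenience)
            client.Indices.Refresh("posts");

            var searchResponse = client.Search<Post>(s => s.Query(q => q.MatchAll()));
            Console.WriteLine(searchResponse.Documents.Count);
        }
    }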