Tags: linux, mongodb, docker, docker-compose, mongodb-compass

MongoDB 7 Replica Set with 3 Members - Username/Password Auth - Security Key File


I'm using Portainer (Community 2.19.4) on my localhost (Arch-based Linux distro: EndeavourOS 2023.05.28) to manage Docker containers...

I want to set up a MongoDB replica set (1 primary + 2 secondaries = 3 members total) with the latest MongoDB, which is 7.0.5. This is local, but I still want it secured, so I added a username and password, plus a security key file created with openssl.
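
I generated the key file roughly like this (a sketch: the chmod is required because mongod refuses a key file with open permissions, and the chown to uid 999, the mongodb user in the official image, is my assumption about what a bind-mounted key needs):

# Generate up to 1024 base64 characters of key material (756 bytes -> 1008 chars)
openssl rand -base64 756 > /home/user/ducker/mongo/rep.key

# mongod requires the key file to be readable only by its owner
chmod 400 /home/user/ducker/mongo/rep.key

# ASSUMPTION: the official mongo image runs mongod as uid/gid 999
sudo chown 999:999 /home/user/ducker/mongo/rep.key

Anyway, here is my compose file: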

version: '3.8'

services:
  mongo1:
    image: mongo:7
    volumes:
      - /home/user/ducker/mongo-three-replica/rep1/data:/data/db
      - /home/user/ducker/mongo/rep.key:/opt/keyfile/mongo-keyfile
    environment:
      MONGO_INITDB_ROOT_USERNAME: your_username
      MONGO_INITDB_ROOT_PASSWORD: your_password
    command: "--replSet rs0 --keyFile /opt/keyfile/mongo-keyfile"
    ports:
      - "27017:27017"
    networks:
      - mongo-cluster

  mongo2:
    image: mongo:7
    volumes:
      - /home/user/ducker/mongo-three-replica/rep2/data:/data/db
      - /home/user/ducker/mongo/rep.key:/opt/keyfile/mongo-keyfile
    environment:
      MONGO_INITDB_ROOT_USERNAME: your_username
      MONGO_INITDB_ROOT_PASSWORD: your_password
    command: "--replSet rs0 --keyFile /opt/keyfile/mongo-keyfile"
    ports:
      - "27018:27017"
    networks:
      - mongo-cluster

  mongo3:
    image: mongo:7
    volumes:
      - /home/user/ducker/mongo-three-replica/rep3/data:/data/db
      - /home/user/ducker/mongo/rep.key:/opt/keyfile/mongo-keyfile
    environment:
      MONGO_INITDB_ROOT_USERNAME: your_username
      MONGO_INITDB_ROOT_PASSWORD: your_password
    command: "--replSet rs0 --keyFile /opt/keyfile/mongo-keyfile"
    ports:
      - "27019:27017"
    networks:
      - mongo-cluster

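  # One-shot helper container: waits until each member responds, then initiates the replica set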
  rs-init:
    image: mongo:7
    depends_on:
      - mongo1
      - mongo2
      - mongo3
    networks:
      - mongo-cluster
    command: >
      bash -c "until mongosh --host mongo1:27017 --username your_username --password your_password --eval 'print(\"waiting for mongo1\")'; do sleep 2; done &&
             until mongosh --host mongo2:27017 --username your_username --password your_password --eval 'print(\"waiting for mongo2\")'; do sleep 2; done &&
             until mongosh --host mongo3:27017 --username your_username --password your_password --eval 'print(\"waiting for mongo3\")'; do sleep 2; done &&
             mongosh --host mongo1:27017 --username your_username --password your_password --eval '
             rs.initiate({
               _id: \"rs0\",
               members: [
                 { _id: 0, host: \"mongo1:27017\" },
                 { _id: 1, host: \"mongo2:27017\" },
                 { _id: 2, host: \"mongo3:27017\" }
               ]
             })'"
    restart: "no"

networks:
  mongo-cluster:
    driver: bridge

As you can see, all of the MongoDB containers are on the same Docker network (mongo-cluster), and the driver is bridge.
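
That can be double-checked from the host; note that Compose/Portainer prefixes the network name with the stack name, so <stack> below is a placeholder:

# Find the exact network name (prefixed with the stack/project name)
docker network ls | grep mongo-cluster

# The "Containers" section lists every attached container and its bridge IP
docker network inspect <stack>_mongo-cluster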

Also, when I inspect all of the containers, there are no errors. The problem is that I cannot connect to those databases via MongoDB Compass or Robo3T.
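
For example (a sketch using the docker compose CLI; in Portainer the same information is in each container's log view, and the generated container names may differ):

# The one-shot rs-init container should have exited cleanly after rs.initiate
docker compose ps -a          # rs-init should show Exited (0)
docker compose logs rs-init   # the last output should include { ok: 1 }

# And nothing alarming in the mongod logs
docker compose logs mongo1 | grep -i error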

When I use this connection string:

mongodb://your_username:your_password@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0

MongoDB Compass says "getaddrinfo ENOTFOUND mongo2", and the hostname changes every time: mongo1, mongo3, etc.
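
Those names look like the member hostnames from the replica set config, and on the host they indeed fail to resolve, while inside the Docker network they are served by Docker's embedded DNS:

# On the host: no DNS entry, nothing in /etc/hosts, so the lookup fails
getent hosts mongo2 || echo "mongo2 does not resolve here"

# Inside a container on the mongo-cluster network it resolves fine
# (<mongo1-container> is a placeholder for the generated container name)
docker exec <mongo1-container> getent hosts mongo2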

However, when I use this connection string:

mongodb://your_username:your_password@localhost:27017/?replicaSet=rs0&authSource=admin&directConnection=true

I can connect. Here are the rs.status() results as well:

{
  set: 'rs0',
  date: 2024-01-19T04:37:25.215Z,
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1705639038, i: 1 }), t: Long('1') },
    lastCommittedWallTime: 2024-01-19T04:37:18.999Z,
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1705639038, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1705639038, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1705639038, i: 1 }), t: Long('1') },
    lastAppliedWallTime: 2024-01-19T04:37:18.999Z,
    lastDurableWallTime: 2024-01-19T04:37:18.999Z
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1705639038, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: 2024-01-19T03:46:28.896Z,
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1705635978, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1705635978, i: 1 }), t: Long('-1') },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: 2024-01-19T03:46:28.933Z,
    wMajorityWriteAvailabilityDate: 2024-01-19T03:46:29.458Z
  },
  members: [
    {
      _id: 0,
      name: 'mongo1:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 3068,
      optime: [Object],
      optimeDate: 2024-01-19T04:37:18.000Z,
      lastAppliedWallTime: 2024-01-19T04:37:18.999Z,
      lastDurableWallTime: 2024-01-19T04:37:18.999Z,
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1705635988, i: 1 }),
      electionDate: 2024-01-19T03:46:28.000Z,
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongo2:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 3066,
      optime: [Object],
      optimeDurable: [Object],
      optimeDate: 2024-01-19T04:37:18.000Z,
      optimeDurableDate: 2024-01-19T04:37:18.000Z,
      lastAppliedWallTime: 2024-01-19T04:37:18.999Z,
      lastDurableWallTime: 2024-01-19T04:37:18.999Z,
      lastHeartbeat: 2024-01-19T04:37:24.921Z,
      lastHeartbeatRecv: 2024-01-19T04:37:23.916Z,
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongo1:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 2,
      name: 'mongo3:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 3066,
      optime: [Object],
      optimeDurable: [Object],
      optimeDate: 2024-01-19T04:37:18.000Z,
      optimeDurableDate: 2024-01-19T04:37:18.000Z,
      lastAppliedWallTime: 2024-01-19T04:37:18.999Z,
      lastDurableWallTime: 2024-01-19T04:37:18.999Z,
      lastHeartbeat: 2024-01-19T04:37:24.921Z,
      lastHeartbeatRecv: 2024-01-19T04:37:23.916Z,
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongo1:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1705639038, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('dQbmbbD3p1ac17YJFU75jyIg1mU=', 0),
      keyId: Long('7325650787340648454')
    }
  },
  operationTime: Timestamp({ t: 1705639038, i: 1 })
}

So, what am I doing wrong? I need to connect to the entire replica set (and not just a single node) using a MongoDB client.


Solution

  • OK, adding the containers' IPs and hostnames to the hosts file (/etc/hosts) resolved my problem: the client discovers the members under the hostnames stored in the replica set config (mongo1, mongo2, mongo3), and those names were not resolvable from my host. Now I wonder: are there any other ways to solve this error?
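
    A sketch of that fix (the 172.x addresses are placeholders; the real ones come from docker inspect, and the container names are the generated ones shown by docker ps):

    # Look up each member's IP on the bridge network
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <mongo1-container>

    # Then map the replica set hostnames to those IPs in /etc/hosts:
    # 172.18.0.2  mongo1
    # 172.18.0.3  mongo2
    # 172.18.0.4  mongo3

    With those entries in place, the multi-host connection string from above works without directConnection=true, because every member the driver discovers can now be resolved and reached directly on port 27017.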