I am trying to understand how Octavia is put together. I created a loadbalancer on a VLAN network, and it was assigned an address of 10.40.0.7. When I do openstack loadbalancer list, I see a vip_address of 10.40.0.7, which is not assigned to any of the amphorae.
I want to understand where the loadbalancer address is mapped. It is not a host. I can't ssh to that address. Perhaps it is the amphora driver, but what exactly is that? I can't find that address in any namespace, and I can't see it assigned to any bridge. What is it assigned to?
Thanks
Ranga
It is not a host.
It is a host! An amphora is just a nova server -- the same thing you get when you run openstack server create. The difference is that the amphora is owned by the service project, so you'll only see it if you were to run (as admin) openstack server list --all-projects. For example:
$ openstack --os-cloud as_me loadbalancer list
+--------------------------------------+---------+----------------------------------+-------------+---------------------+----------+
| id | name | project_id | vip_address | provisioning_status | provider |
+--------------------------------------+---------+----------------------------------+-------------+---------------------+----------+
| 64a6a56d-beeb-4ee2-b495-1cdc26ffd399 | test_lb | 0ac1e30189da48b387cf3c2f5582b2a3 | 10.254.0.6 | ACTIVE | octavia |
+--------------------------------------+---------+----------------------------------+-------------+---------------------+----------+
$ openstack --os-cloud as_admin server list --all-projects | grep amphora
| f6cd75fe-8513-4aae-bee9-af9362525703 | amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af | ACTIVE | lb-mgmt-net=172.24.0.16; test_lb_net=10.254.0.11; test_net1=10.0.1.5; test_net0=10.0.0.4 | octavia-amphora-13.0-20181107.1.x86_64 | octavia_65 |
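If you'd rather not grep the server list, the Octavia client itself can map a load balancer to its amphorae. This is just a sketch, assuming you have the python-octaviaclient plugin installed and admin credentials (the --loadbalancer filter only exists in more recent client releases):
# List the amphorae serving a specific load balancer
$ openstack --os-cloud as_admin loadbalancer amphora list --loadbalancer 64a6a56d-beeb-4ee2-b495-1cdc26ffd399
# Show one amphora: lb_network_ip is its management address, ha_ip is the load balancer VIP
$ openstack --os-cloud as_admin loadbalancer amphora show 50dddb41-decf-4b3b-bb7a-f35a751d74af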
If you look at that server, you'll see it has several IP addresses: one on the lb-mgmt-net management network, one on the VIP network (test_lb_net), and one on each of the other networks it is plugged into (test_net0 and test_net1).
You can ssh into the amphora using its management network address, which you should be able to reach from your controllers. You'll need the appropriate ssh key; where to find that probably depends a lot on how you installed things. I'm using TripleO, and it looks as if the install uses ~/.ssh/id_rsa from the stack user as the amphora ssh key.
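If you're not sure which keypair your installer configured (Octavia sets this with the amp_ssh_key_name option in octavia.conf), one way to check is to ask nova which key the amphora was booted with. A quick sketch, reusing the server ID from the listing above:
# key_name shows the nova keypair the amphora was created with
$ openstack --os-cloud as_admin server show f6cd75fe-8513-4aae-bee9-af9362525703 -c key_name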
[controller ~]$ ssh -i amphora_private_key cloud-user@172.24.0.7
Last login: Thu Nov 15 22:01:16 2018 from 172.24.0.6
[cloud-user@amphora-7d48e10b-5ba4-42c9-bcd5-941d224b2a46 ~]$
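One note before the update below: the prompts there are root prompts. On the stock amphora image, cloud-user should have passwordless sudo, so once you're logged in you can just become root:
[cloud-user@amphora-7d48e10b-5ba4-42c9-bcd5-941d224b2a46 ~]$ sudo -i
[root@amphora-7d48e10b-5ba4-42c9-bcd5-941d224b2a46 ~]#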
Update
The loadbalancer VIP is assigned to an interface inside a namespace on the amphora. Given the above configuration, I see:
[root@amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af ~]# ip netns
amphora-haproxy (id: 0)
[root@amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af ~]# ip netns exec amphora-haproxy ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:07:d2:26 brd ff:ff:ff:ff:ff:ff
inet 10.254.0.11/24 brd 10.254.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.254.0.6/24 brd 10.254.0.255 scope global secondary eth1:0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe07:d226/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:21:9a:d1 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.4/24 brd 10.0.0.255 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe21:9ad1/64 scope link
valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:2a:63:58 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.5/24 brd 10.0.1.255 scope global eth3
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe2a:6358/64 scope link
valid_lft forever preferred_lft forever
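That secondary address on eth1 (10.254.0.6, shown as eth1:0) is exactly the vip_address that openstack loadbalancer list reports. The thing answering on it is haproxy, which runs inside that same namespace. If you want to confirm that once you've created a listener, something like the following should work (assuming the ss utility is present in your amphora image):
# Listening sockets inside the namespace -- haproxy should be bound to the VIP on the listener port
[root@amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af ~]# ip netns exec amphora-haproxy ss -lntp
# The haproxy process itself runs on the amphora, one per load balancer
[root@amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af ~]# ps aux | grep haproxy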