
How to test Neutron VRRP HA rapidly (by quqi99)


Author: Zhang Hua  Published: 2015-12-09
Copyright notice: this article may be freely reproduced, provided the reproduction clearly indicates, via hyperlink, the original source, the author and this copyright notice.

(http://blog.csdn.net/quqi99 )

Neutron VRRP HA still does not support the conntrack feature at the moment.

1, Setting up test environment

juju add-model queens-vrrp
juju deploy ./b/openstack.yaml

juju add-unit neutron-gateway
juju ssh neutron-gateway/1 -- hostname
source ~/novarc && nova interface-attach $(openstack server list -f value |awk '/juju-4262ae-queens-vrrp-10/ {print $1}') --net-id=$(openstack network list -f value |awk '/ zhhuabj_admin_net / {print $1}')
juju ssh neutron-gateway/1 -- sudo ovs-vsctl add-port br-data ens7
juju ssh neutron-gateway/1 -- sudo ifconfig ens7 up
juju config neutron-api overlay-network-type=vxlan
juju config neutron-api enable-l3ha=true

./configure
source ~/stsstack-bundles/novarc
./tools/instance_launch.sh 1 xenial
./tools/sec_groups.sh

fix_ip=$(openstack server list -f value |awk '/xenial/ {print $4}' |awk -F '=' '{print $2}')
public_network=$(openstack network show ext_net -f value -c id)
fip=$(openstack floating ip create $public_network -f value -c floating_ip_address)
openstack floating ip set $fip --fixed-ip-address $fix_ip --port $(openstack port list --fixed-ip ip-address=$fix_ip -c id -f value)

# update the existing router as ha router
#ROUTER_ID=$(neutron router-show provider-router -c id -f value)
#neutron router-update $ROUTER_ID --admin_state_up=false ; neutron router-update $ROUTER_ID --ha=true; neutron router-update $ROUTER_ID --admin_state_up=true
#AGENT_ID=$(neutron l3-agent-list-hosting-router $ROUTER_ID |grep active |awk -F '|' '{print $2}')
#neutron l3-agent-router-remove $AGENT_ID $ROUTER_ID
#neutron l3-agent-router-add $AGENT_ID $ROUTER_ID

# enable dvr
#juju config neutron-api l2-population=false enable-l3ha=true
#neutron router-update --admin-state-up False provider-router
#neutron router-update provider-router --distributed True --ha=True
#neutron router-update --admin-state-up True provider-router

# test patch - https://review.opendev.org/#/c/601533/
git clone https://github.com/openstack/charm-neutron-gateway.git neutron-gateway
cd neutron-gateway/
git fetch https://review.opendev.org/openstack/charm-neutron-gateway refs/changes/33/601533/1 && git format-patch -1 --stdout FETCH_HEAD > lp1732154.patch
git checkout master
patch -p1 < lp1732154.patch
juju upgrade-charm neutron-gateway --path $PWD

wget https://gist.githubusercontent.com/dosaboy/cf8422f16605a76affa69a8db47f0897/raw/8e045160440ecf0f9dc580c8927b2bff9e9139f6/check_router_vrrp_transitions.sh
chmod +x check_router_vrrp_transitions.sh
./check_router_vrrp_transitions.sh

ubuntu@juju-4262ae-queens-vrrp-5:~$ bash check_router_vrrp_transitions.sh
Analysing keepalived vrrp transitions...1 active vrouters found (total 1):
router=b8d4435b-bd83-46fd-a828-6d8a0b52d23a (current=false, vrid=VR_1, pid=16716, first=Apr-23-01:48:20, last=Apr-23-01:57:05) had 2 transition(s)
router=b8d4435b-bd83-46fd-a828-6d8a0b52d23a (current=true, vrid=VR_1, pid=24269, first=Apr-23-02:22:16, last=Apr-23-02:22:28) had 2 transition(s) (state=MASTER)
Done.

ubuntu@zhhuabj-bastion:~$ juju ssh neutron-gateway/0 -- sudo cat /var/lib/neutron/ha_confs/b8d4435b-bd83-46fd-a828-6d8a0b52d23a/ha_check_script_1.sh
#!/bin/bash -eu
ip a | grep fe80::f816:3eff:fe2d:db49 || exit 0
ping -c 1 -w 1 10.5.0.1 1>/dev/null || exit 1

ubuntu@zhhuabj-bastion:~$ juju ssh neutron-gateway/0 -- cat /var/lib/neutron/ha_confs/b8d4435b-bd83-46fd-a828-6d8a0b52d23a/keepalived.conf
global_defs {
    notification_email_from neutron@openstack.local
    router_id neutron
}
vrrp_script ha_health_check_1 {
    script "/var/lib/neutron/ha_confs/b8d4435b-bd83-46fd-a828-6d8a0b52d23a/ha_check_script_1.sh"
    interval 30
    fall 2
    rise 2
}
vrrp_instance VR_1 {
    state BACKUP
    interface ha-c172aff0-67
    virtual_router_id 1
    priority 50
    garp_master_delay 60
    nopreempt
    advert_int 2
    track_interface {
        ha-c172aff0-67
    }
    virtual_ipaddress {
        169.254.0.1/24 dev ha-c172aff0-67
    }
    virtual_ipaddress_excluded {
        10.5.150.0/16 dev qg-aa1ed68a-d0
        10.5.150.2/32 dev qg-aa1ed68a-d0
        192.168.21.1/24 dev qr-00aa8199-1d
        fe80::f816:3eff:fe2d:db49/64 dev qr-00aa8199-1d scope link
        fe80::f816:3eff:feeb:a6de/64 dev qg-aa1ed68a-d0 scope link
    }
    virtual_routes {
        0.0.0.0/0 via 10.5.0.1 dev qg-aa1ed68a-d0
    }
    track_script {
        ha_health_check_1
    }
}


Some important configurations in neutron.conf are as below:
l3_ha = True
max_l3_agents_per_router = 2
min_l3_agents_per_router = 2
allow_automatic_l3agent_failover = False
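
A quick way to sanity-check that these settings have taken effect (a sketch; the exact column names printed by 'openstack router show' vary by release) is to create a throwaway router and confirm that it is flagged as HA and scheduled to two l3-agents:

openstack router create test-ha-router
openstack router show test-ha-router | grep -iE '\| ha |distributed'
neutron l3-agent-list-hosting-router test-ha-router
openstack router delete test-ha-router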

2, Future Work & Limitations
http://assafmuller.com/2014/08/16/layer-3-high-availability/
http://blog.aaronorosen.com/implementing-high-availability-instances-with-neutron-using-vrrp/

  • TCP connection tracking – With the current implementation, TCP sessions are broken on failover. The idea is to use conntrackd in order to replicate the session states across HA routers, so that when the failover finishes, TCP sessions will continue where they left off.
  • Where is the master instance hosted? As it is now it is impossible for the admin to know which network node is hosting the master instance of a HA router. The plan is for the agents to report this information and for the server to expose it via the API.
  • Evacuating an agent – Ideally bringing down a node for maintenance should cause all of the HA router instances on said node to relinquish their master states, speeding up the failover process.
  • Notifying L2pop of VIP movements – Consider the IP/MAC of the router on a tenant network. Only the master instance will actually have the IP configured, but the same Neutron port and same MAC will show up on all participating network nodes. This might have adverse effects on the L2pop mechanism driver, as it expects a MAC address in a single location in the network. The plan to solve this deficiency is to send an RPC message from the agent whenever it detects a VRRP state change, so that when a router becomes the master, the controller is notified, which can then update the L2pop state.
  • FW, VPN and LB as a service integration. Both DVR and L3 HA have issues integrating with the advanced services, and a more serious look will be taken during the Kilo cycle.
  • One HA network per tenant. This implies a limit of 255 HA routers per tenant, as each router takes up a VRID, and the VRRP protocol allows 255 distinct VRID values in a single broadcast domain.
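
For the last point, the per-tenant HA network and its HA ports can be inspected directly (a sketch based on the listings later in this article; the grep strings are just the default names neutron uses):

openstack network list | grep "HA network tenant"
openstack port list | grep "HA port"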



3, Test steps and data

ubuntu@zhhuabj-bastion:~/openstack-charm-testing$ neutron net-list
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| id                                   | name                                               | subnets                                               |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| 70773c83-08fe-4efe-b601-4f04522d867f | ext_net                                            | 6aed68fb-be4a-4a33-b53e-c2b742ac9985 10.5.0.0/16      |
| 98e10e32-13eb-48ee-b265-4ae0e449b6e5 | private                                            | b6afcd08-f3b6-4224-a125-3befa4b34a63 192.168.21.0/24  |
| bc699825-19e1-4bf6-a0cf-fe90251ad391 | HA network tenant b31bb0f325fd4ec291a5a65d3dc11a63 | 20173269-0eb4-4f3f-9557-807c69f6e258 169.254.192.0/18 |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+


ubuntu@zhhuabj-bastion:~/openstack-charm-testing$ neutron router-list
+--------------------------------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id                                   | name            | external_gateway_info                                                                                                                                                                  |
+--------------------------------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| dd74a4c6-8320-40d6-b3b1-232e578cb6c7 | provider-router | {"network_id": "70773c83-08fe-4efe-b601-4f04522d867f", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "6aed68fb-be4a-4a33-b53e-c2b742ac9985", "ip_address": "10.5.150.0"}]} |
+--------------------------------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
ubuntu@zhhuabj-bastion:~/neutron-gateway-ha$ neutron l3-agent-list-hosting-router provider-router
+--------------------------------------+-------------------------+----------------+-------+
| id                                   | host                    | admin_state_up | alive |
+--------------------------------------+-------------------------+----------------+-------+
| 99ca5ee5-33b6-43da-86e2-df1f16fbd7cb | juju-zhhuabj-machine-23 | True           | :-)   |
| c95e33d5-b080-458c-8adb-d15bafc16bfb | juju-zhhuabj-machine-12 | True           | :-)   |
+--------------------------------------+-------------------------+----------------+-------+


ubuntu@zhhuabj-bastion:~/openstack-charm-testing$ nova list
+--------------------------------------+------+--------+------------+-------------+----------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                         |
+--------------------------------------+------+--------+------------+-------------+----------------------------------+
| ebfe6140-580c-43dd-b582-5e708eba58d8 | i1   | ACTIVE | -          | Running     | private=192.168.21.5, 10.5.150.1 |
+--------------------------------------+------+--------+------------+-------------+----------------------------------+


ubuntu@juju-zhhuabj-machine-12:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 ip addr show
2: ha-786d2e2f-a8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:f5:c4:16 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-786d2e2f-a8
       valid_lft forever preferred_lft forever
    inet 169.254.0.1/24 scope global ha-786d2e2f-a8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef5:c416/64 scope link
       valid_lft forever preferred_lft forever
3: qr-de8b99fa-7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:9f:a3:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.1/24 scope global qr-de8b99fa-7c
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe9f:a31b/64 scope link
       valid_lft forever preferred_lft forever
4: qg-aeb51a1d-32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:13:a9:24 brd ff:ff:ff:ff:ff:ff
    inet 10.5.150.0/16 scope global qg-aeb51a1d-32
       valid_lft forever preferred_lft forever
    inet 10.5.150.1/32 scope global qg-aeb51a1d-32
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:a924/64 scope link
       valid_lft forever preferred_lft forever


ubuntu@juju-zhhuabj-machine-23:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 ip addr show
2: ha-37f7144e-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:04:b4:ce brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-37f7144e-02
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe04:b4ce/64 scope link
       valid_lft forever preferred_lft forever
3: qr-de8b99fa-7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:9f:a3:1b brd ff:ff:ff:ff:ff:ff
4: qg-aeb51a1d-32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:13:a9:24 brd ff:ff:ff:ff:ff:ff


ubuntu@juju-zhhuabj-machine-12:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.5.0.1        0.0.0.0         UG    0      0        0 qg-aeb51a1d-32
10.5.0.0        0.0.0.0         255.255.0.0     U     0      0        0 qg-aeb51a1d-32
169.254.0.0     0.0.0.0         255.255.255.0   U     0      0        0 ha-786d2e2f-a8
169.254.192.0   0.0.0.0         255.255.192.0   U     0      0        0 ha-786d2e2f-a8
192.168.21.0    0.0.0.0         255.255.255.0   U     0      0        0 qr-de8b99fa-7c


ubuntu@juju-zhhuabj-machine-23:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
169.254.192.0   0.0.0.0         255.255.192.0   U     0      0        0 ha-37f7144e-02


ubuntu@juju-zhhuabj-machine-12:~$ ps -ef|grep ha
root        71     2  0 06:55 ?        00:00:00 [charger_manager]
neutron   5597     1  0 08:14 ?        00:00:00 /usr/bin/python /usr/bin/neutron-keepalived-state-change --router_id=dd74a4c6-8320-40d6-b3b1-232e578cb6c7 --namespace=qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 --conf_dir=/var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7 --monitor_interface=ha-786d2e2f-a8 --monitor_cidr=169.254.0.1/24 --pid_file=/var/lib/neutron/external/pids/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.monitor.pid --state_path=/var/lib/neutron --user=108 --group=112
root      5649     1  0 08:14 ?        00:00:00 keepalived -P -f /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/keepalived.conf -p /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid -r /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid-vrrp
root     11780  5649  0 09:02 ?        00:00:00 keepalived -P -f /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/keepalived.conf -p /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid -r /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid-vrrp


ubuntu@juju-zhhuabj-machine-23:~$ ps -ef |grep ha
root        71     2  0 07:09 ?        00:00:00 [charger_manager]
root      6060 32207  0 09:02 ?        00:00:00 keepalived -P -f /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/keepalived.conf -p /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid -r /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid-vrrp
neutron  32142     1  0 08:14 ?        00:00:00 /usr/bin/python /usr/bin/neutron-keepalived-state-change --router_id=dd74a4c6-8320-40d6-b3b1-232e578cb6c7 --namespace=qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 --conf_dir=/var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7 --monitor_interface=ha-37f7144e-02 --monitor_cidr=169.254.0.1/24 --pid_file=/var/lib/neutron/external/pids/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.monitor.pid --state_path=/var/lib/neutron --user=108 --group=112
root     32207     1  0 08:14 ?        00:00:00 keepalived -P -f /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/keepalived.conf -p /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid -r /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7.pid-vrrp


ubuntu@juju-zhhuabj-machine-12:~$ sudo cat /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/state
master
ubuntu@juju-zhhuabj-machine-12:~$ sudo cat /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/keepalived.conf
vrrp_instance VR_1 {
    state BACKUP
    interface ha-786d2e2f-a8
    virtual_router_id 1
    priority 50
    garp_master_repeat 5
    garp_master_refresh 10
    nopreempt
    advert_int 2
    track_interface {
        ha-786d2e2f-a8
    }
    virtual_ipaddress {
        169.254.0.1/24 dev ha-786d2e2f-a8
    }
    virtual_ipaddress_excluded {
        10.5.150.0/16 dev qg-aeb51a1d-32
        10.5.150.1/32 dev qg-aeb51a1d-32
        192.168.21.1/24 dev qr-de8b99fa-7c
        fe80::f816:3eff:fe13:a924/64 dev qg-aeb51a1d-32 scope link
        fe80::f816:3eff:fe9f:a31b/64 dev qr-de8b99fa-7c scope link
    }
    virtual_routes {
        0.0.0.0/0 via 10.5.0.1 dev qg-aeb51a1d-32
    }
}


ubuntu@juju-zhhuabj-machine-23:~$ sudo cat /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/state
backup
ubuntu@juju-zhhuabj-machine-23:~$ sudo cat /var/lib/neutron/ha_confs/dd74a4c6-8320-40d6-b3b1-232e578cb6c7/keepalived.conf
vrrp_instance VR_1 {
    state BACKUP
    interface ha-37f7144e-02
    virtual_router_id 1
    priority 50
    garp_master_repeat 5
    garp_master_refresh 10
    nopreempt
    advert_int 2
    track_interface {
        ha-37f7144e-02
    }
    virtual_ipaddress {
        169.254.0.1/24 dev ha-37f7144e-02
    }
    virtual_ipaddress_excluded {
        10.5.150.0/16 dev qg-aeb51a1d-32
        10.5.150.1/32 dev qg-aeb51a1d-32
        192.168.21.1/24 dev qr-de8b99fa-7c
        fe80::f816:3eff:fe13:a924/64 dev qg-aeb51a1d-32 scope link
        fe80::f816:3eff:fe9f:a31b/64 dev qr-de8b99fa-7c scope link
    }
    virtual_routes {
        0.0.0.0/0 via 10.5.0.1 dev qg-aeb51a1d-32
    }
}


ubuntu@juju-zhhuabj-machine-12:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 ifconfig ha-786d2e2f-a8 down



ubuntu@juju-zhhuabj-machine-12:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-786d2e2f-a8: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether fa:16:3e:f5:c4:16 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-786d2e2f-a8
       valid_lft forever preferred_lft forever
3: qr-de8b99fa-7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:9f:a3:1b brd ff:ff:ff:ff:ff:ff
4: qg-aeb51a1d-32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:13:a9:24 brd ff:ff:ff:ff:ff:ff


ubuntu@juju-zhhuabj-machine-23:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-37f7144e-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:04:b4:ce brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-37f7144e-02
       valid_lft forever preferred_lft forever
    inet 169.254.0.1/24 scope global ha-37f7144e-02
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe04:b4ce/64 scope link
       valid_lft forever preferred_lft forever
3: qr-de8b99fa-7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:9f:a3:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.1/24 scope global qr-de8b99fa-7c
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe9f:a31b/64 scope link
       valid_lft forever preferred_lft forever
4: qg-aeb51a1d-32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:13:a9:24 brd ff:ff:ff:ff:ff:ff
    inet 10.5.150.0/16 scope global qg-aeb51a1d-32
       valid_lft forever preferred_lft forever
    inet 10.5.150.1/32 scope global qg-aeb51a1d-32
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:a924/64 scope link
       valid_lft forever preferred_lft forever

ubuntu@zhhuabj-bastion:~/neutron-gateway-ha$ ping 10.5.150.1
PING 10.5.150.1 (10.5.150.1) 56(84) bytes of data.
64 bytes from 10.5.150.1: icmp_seq=1 ttl=63 time=5.00 ms
64 bytes from 10.5.150.1: icmp_seq=2 ttl=63 time=1.69 ms


ubuntu@juju-zhhuabj-machine-23:~$ sudo ip netns exec qrouter-dd74a4c6-8320-40d6-b3b1-232e578cb6c7 tcpdump -n -i ha-37f7144e-02
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ha-37f7144e-02, link-type EN10MB (Ethernet), capture size 65535 bytes
03:59:45.723931 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
03:59:47.724833 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
03:59:49.727033 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
03:59:51.726947 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
03:59:53.728466 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
03:59:55.730700 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
03:59:57.731620 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
03:59:59.733136 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:01.734053 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:03.734711 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:05.737778 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:07.737146 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:09.739794 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:11.739332 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:13.740744 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:15.742741 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:17.744151 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:19.745413 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:21.746639 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:23.748244 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:25.750017 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:27.751713 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:29.752460 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:31.753797 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:33.755248 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:35.757904 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:37.759372 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:39.760449 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:41.761482 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:43.762326 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:45.764282 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:47.763669 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:49.764741 IP 169.254.192.2 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:56.570867 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:00:58.572437 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:00:58.572504 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:00:58.572542 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:00:58.572581 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:00:58.572609 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:00:58.572665 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:00.573412 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:02.574520 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:03.573747 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:01:03.573850 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:01:03.573898 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:01:03.573948 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:01:03.573993 ARP, Request who-has 169.254.0.1 (ff:ff:ff:ff:ff:ff) tell 169.254.0.1, length 28
04:01:04.575330 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:06.576547 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:08.577765 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:10.578863 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:12.579946 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:14.581031 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:16.582120 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:18.583161 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:20.584254 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:22.585400 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:24.586493 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:26.587590 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:28.588677 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:30.589448 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:32.590557 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:34.591728 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:36.592887 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:38.594052 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:40.595402 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:42.596719 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:44.597883 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:46.599080 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20
04:01:48.600390 IP 169.2

20160304 Update

Keepalived has a bug: it performs an unnecessary DNS lookup whenever its configuration changes (this can be observed with 'ip netns exec qrouter-xxx tcpdump -vvv -s 0 -l -n port 53'). If the qrouter- namespace cannot reach the DNS server, the master node blocks for roughly one minute on that DNS lookup, and during that window the backup node becomes the master. See the bugs:

 https://bugzilla.redhat.com/show_bug.cgi?id=1181592 

https://bugs.launchpad.net/neutron/+bug/1511722

The workaround is to add a hostname entry to /etc/hosts:

dig A $(hostname) | grep -A1 "ANSWER SEC" | tail -n 1 | awk '{print $NF " " $1}' | sed -e 's/.$//g'  >>/etc/hosts ;   grep $(hostname) /etc/hosts || echo "Failure setting up the hostname entry"
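
To confirm whether the router namespace is really blocked on DNS lookups, and whether the /etc/hosts workaround took effect, something like the following can be used (the router ID is a placeholder), as already hinted at above:

sudo ip netns exec qrouter-<router-id> tcpdump -vvv -s 0 -l -n port 53
grep "$(hostname)" /etc/hosts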

Some other bugs (keepalived should only be started after the HA port has become active, so that once the neutron-keepalived-state-change process notices the change on the ha-xxx port via 'ip netns exec xxx ip -o monitor address' it can notify neutron-server to mark the router as master):

20171215 Update

In a real-world case we hit the multi-active (or all-standby) problem. The root causes eventually identified include:

1, When there are too many routers on an l3-agent, the following error appears in syslog. Besides increasing the ulimit as shown below, it may also be necessary to set ovs_vsctl_timeout=60 (20111108 update: of_inactivity_probe is another relevant parameter, see https://review.opendev.org/#/c/660074/) (20111109 update: yet another parameter is mac-table-size, Bug #1775797 “The mac table size of neutron bridges (br-tun, br-...” : Bugs : neutron); pinning OVS to dedicated CPU cores can also help.
"hostname ovs-vswitchd: ovs|1762125|netlink_socket|ERR|fcntl: Too many open files"
sudo lsof -p 2279  # number of files opened by the process
sudo prlimit -p 2279 --nofile=131070  # to make it persistent, modify /etc/security/limits.d/ovs.conf

cat /etc/security/limits.d/ovs.conf

root soft nofile 131070 
root hard nofile 1048576 
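
For reference, the other knobs mentioned above would look roughly like this (a sketch only; the values are illustrative and the option section can vary by release):

# in the [ovs] section of /etc/neutron/plugins/ml2/openvswitch_agent.ini:
#   ovs_vsctl_timeout = 60
#   of_inactivity_probe = 30
# enlarge the MAC learning table of the neutron bridges (see Bug #1775797):
sudo ovs-vsctl set bridge br-tun other-config:mac-table-size=50000
sudo ovs-vsctl set bridge br-int other-config:mac-table-size=50000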


2, The rabbitmq cluster had performance problems, so it was temporarily switched to a non-clustered rabbitmq.


3, After these changes, the routers' states do not resynchronise by themselves: if the original problem was in rabbitmq, the event that neutron-keepalived-state-change observed via ip monitor when the VRRP VIP moved has been lost, and after rabbitmq recovers that event will not be sent again (https://github.com/openstack/neutron/blob/stable/ocata/neutron/agent/l3/keepalived_state_change.py#L71), so the l3-agent also needs to be restarted.


4, The following fixed patches are also all required:

https://review.openstack.org/#/c/470905/ 
https://review.openstack.org/#/c/357458/
https://review.openstack.org/#/c/454657/
https://review.openstack.org/522792       (ocata)

https://review.openstack.org/#/c/522641

When a node goes down, another node is promoted to master, but the old master's HA port may still be marked active in the DB, so after the l3-agent restarts it will still spawn a keepalived process and two master nodes appear at the same time. This can be solved by resetting the status of all HA ports to DOWN when the l3-agent restarts.
See https://review.openstack.org/#/c/470905/, which sets all HA ports to DOWN in fetch_and_sync_all_routers (only run when the state is AGENT_REVIVED, which an agent enters when it periodically loses its heartbeat) -> get_router_ids().
However, sometimes when the l3-agent fails to report its heartbeat in time, its status is set to AGENT_REVIVED, which triggers the above reset of HA ports to DOWN; the l3-agent then has to repeatedly reprocess the same router, find that its HA port is active again, and call enable_keepalived. That increases the load on both the l3-agent and the l2-agent, which in turn makes the AGENT_REVIVED state even more likely. So the reset-to-DOWN should only be done once, when the l3-agent restarts (__init__ -> get_service_plugin_list -> _update_ha_network_port_status); see https://review.openstack.org/#/c/522792/
Another cause: if the l3-agent is restarted before the l2-agent, the l3-agent should not process the HA port and spawn a keepalived instance before the ovs l2-agent has made the HA port ready and active. See https://review.openstack.org/#/c/357458/
The ha_vrrp_health_check_interval option adds support for pinging the gateway and triggering a master/backup switchover when that fails. See https://review.openstack.org/#/c/454657

When there are many routers, it may also be necessary to increase ha_vrrp_advert_int (lp: 1749425) and to set mhash_entries=16000000 mphash_entries=16000000 (lp: 1376958).
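
As a sketch (the values are only examples, not recommendations), those knobs translate into an l3_agent.ini setting and kernel boot parameters:

# in /etc/neutron/l3_agent.ini [DEFAULT]:
#   ha_vrrp_advert_int = 5
#   ha_vrrp_health_check_interval = 30
# kernel command line (requires a grub update and reboot):
#   mhash_entries=16000000 mphash_entries=16000000
grep -o 'mhash_entries=[0-9]*' /proc/cmdline   # verify after reboot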

  • Bugs in neutron-vrrp itself

    • keepalived instances should not be spawned if the HA ports are NOT ready (e.g. when the l3-agent is restarted before the l2-agent) - https://review.openstack.org/#/c/357458/

    • When the master restarts, the slave switches to become the new master, but the old master does not switch to slave because its initial DB status is active; the initial DB status should therefore be set to standby. This patch does so when the agent's status is AGENT_REVIVED - https://review.openstack.org/#/c/470905/

Many VRRP problems are compound problems, not caused by neutron-vrrp itself:

  • Keepalived’s bugs

    • A DNS problem causes the slave to be unable to switch to master - Bug #1511722 “VM loses connectivity on floating ip association w...” : Bugs : neutron

  • Neutron’s inappropriate settings

    • Performance problems caused by a too-small ovs_vsctl_timeout

  • Any performance problem can lead to VRRP problems as well. For example:

    • A rabbitmq performance problem can prevent the l3-agent's heartbeat from reaching neutron-api; neutron-api then thinks the agent has died (AGENT_REVIVED) and sets the initial status to standby via the fix above (https://review.openstack.org/#/c/470905/). So the follow-up patch sets the initial status to standby only when the l3-agent restarts, rather than on AGENT_REVIVED - https://review.openstack.org/#/c/522792/

    • Performance problems can prevent VRRP advertisements from being sent out within a short ha_vrrp_health_check_interval - https://review.openstack.org/#/c/454657

    • Performance problems caused by a small mhash_entries - Bug #1376958 “Quantum-gateway charm should set mhash_entries=160...” : Bugs : neutron-gateway package : Juju Charms Collection

    • Performance problems caused by ulimit, and others ...

20180815 Update

Even after applying all of the methods above, the multiple-master problem can still occur (on 20210916 we hit a multiple-master problem caused by apparmor denying a neutron-rootwrap request, which prevented keepalived from being updated). Let's analyse why.

There are two VRRP-related processes in OpenStack: neutron-keepalived-state-change, which is spawned from ha_router#initialize(), and keepalived, which is spawned from ha_router#process().

Furthermore, _process_added_router leads to neutron-keepalived-state-change being started, while _process_updated_router leads to keepalived being started:
l3_agent#_process_routers_loop -> _process_router_update -> _process_router_if_compatible -> _process_added_router -> ha_router#initialize()
l3_agent#_process_routers_loop -> _process_router_update -> _process_router_if_compatible -> _process_updated_router -> ha_router#process()

In the normal case this guarantees that neutron-keepalived-state-change starts before the keepalived process (20190320 update: this can be addressed by having neutron-keepalived-state-change, once started, set the initial state to master if a keepalived-managed VIP is already present on the node, and to slave otherwise - https://github.com/openstack/neutron/commit/5bcca13f4a58ee5541ae81a45b89f783194f1279) (Bug #1818614 “[SRU] Various L3HA functional tests fails often” : Bugs : neutron). Both processes are registered into external_process.py via process_monitor.register, so once either of them dies it is restarted via external_process.py#_respawn_action, but the restarts have no defined order. For example, if the keepalived process has just died and neutron-keepalived-state-change also dies before it has had a chance to update the DB over MQ, another node will already have become master, yet when the processes on this node restart they will still believe this node is the master. The root cause is that keepalived offers no API to query who the current master is; with such an API, keepalived on each node could check at startup whether a master already exists and, if so, recognise the inconsistency and demote itself to backup. The problem is easy to reproduce by running the following commands on the master:

r=909c6b55-9bc6-476f-9d28-c32d031c41d7
pkill -f "/usr/bin/neutron-keepalived-state-change --router_id=$r"
sudo pkill -f "/var/lib/neutron/ha_confs/$r/keepalived.conf"

However, this problem only shows up as multiple masters in the DB (neutron l3-agent-list-hosting-router provider-router); at the keepalived level everything is still normal (sudo ip netns exec qdhcp-ab1a14d8-3b97-4e5e-9150-081c4afa5729 ping 192.168.21.1).
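
A quick way to see the two layers diverge is to compare what the DB reports with what keepalived actually did on each gateway node (a sketch; the router ID below is the one used earlier in this article):

# on the client/bastion: the DB view
neutron l3-agent-list-hosting-router provider-router
# on each gateway node: the keepalived view
r=dd74a4c6-8320-40d6-b3b1-232e578cb6c7
sudo cat /var/lib/neutron/ha_confs/$r/state
sudo ip netns exec qrouter-$r ip addr show | grep -E '169\.254\.0\.1|qg-'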


The patch https://review.openstack.org/#/c/273546 provides another way to deal with multiple masters via ha_vrrp_health_check_interval: every node periodically pings the gateway, and if the ping fails it indicates that multiple masters have appeared; keepalived then re-elects, based on ha_vrrp_health_check_interval, until a new master is chosen. But it still has a few problems:

1, It pings the external gateway 10.5.0.1 rather than the tenant gateway 192.168.21.1, so it does not help with the problem described above.

2, It has a problem of its own and causes VRRP states to flap back and forth - Bug #1793102 “ha_vrrp_health_check_interval causes constantly VR...” : Bugs : neutron

This problem has two layers. The first is the neutron DB layer, which is a design flaw in neutron: neutron does not query keepalived for the state but maintains it via ip monitor, so it can inevitably fall out of sync and end up showing multiple masters on the neutron side (for example when keepalived and neutron-keepalived-state-change are killed at the same time). As long as the keepalived layer is fine this does not break connectivity; at worst you see multiple masters cosmetically, and that corrects itself after the next VRRP transition. The other layer is a keepalived problem - it should be related to this commit - https://github.com/acassen/keepalived/commit/e90a633c34fbe6ebbb891aa98bf29ce579b8b45c

Another problem: the script below, because it does not add a sleep in the outer for loop, causes the problem described in this bug (Bug #1823314 “ha router sometime goes in standby mode in all con...” : Bugs : neutron): all HA routers under the same tenant end up with the same vr_id, so on any one host only one of those HA routers can start and the rest cannot (a quick check for this symptom follows after the script).

#!/bin/bash
routers=$@
routers_scrubbed=$(echo $routers | sed -e 's/,/ /g')
#routers="87d2302b-77cb-44dc-80cd-7de61edb8482
#9bbb7f94-955e-4f52-be5b-53e502e421be
#b4465e79-2346-46c2-ab32-95f586578ce1
#c51db509-3edb-415f-90c7-1eae0af67bd2
#c6fb1d6f-9ee8-4f8f-b9fe-81b6422c63a5
#cad767e8-c003-49e0-8c7a-d5bcb5c86251"
all_l3_agents=$(neutron agent-list -f value | awk '/l3-agent/ {print $1}')
for router in ${routers_scrubbed}
do
    agents=$(neutron l3-agent-list-hosting-router -f value ${router})
    agent_count=$(echo "${agents}" | wc -l)
    active_agents=$(echo "${agents}" | awk '/active/ {print $1}')
    standby_agents=$(echo "${agents}" | awk '/standby/ {print $1}')
    active_agents_count=$(echo "${agents}" | grep active | wc -l)
    echo "Agents: ${agent_count}, ${active_agents_count} active"
    if [ "${active_agents_count}" -gt "1" ]
    then
        echo "Bad router found, fixing"
        for bump_agent in $active_agents
        do
            echo "Bumping l3 agent ${bump_agent} for router ${router}"
            neutron l3-agent-router-remove "${bump_agent}" "${router}"
            neutron l3-agent-router-add "${bump_agent}" "${router}"
            echo "Waiting 3 seconds between router bumping"
            sleep 3
        done
    elif [ "${active_agents_count}" -lt "1" ]
    then
        echo "Dead router found, fixing"
        # drop all agents, then add all agents
        for bump_agent in $all_l3_agents
        do
            echo "Bumping l3 agent ${bump_agent} for router ${router}"
            neutron l3-agent-router-remove "${bump_agent}" "${router}"
            neutron l3-agent-router-add "${bump_agent}" "${router}"
        done
    fi
done
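
A quick way to spot the symptom described above (HA routers of the same tenant ending up with the same VRID on one node) is to compare the virtual_router_id values in the generated keepalived.conf files; duplicates within one tenant are the suspicious ones (a sketch):

sudo grep -h virtual_router_id /var/lib/neutron/ha_confs/*/keepalived.conf | sort | uniq -c | sort -rn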

20190609 Update

We hit a new problem. A customer reported that with two HA gateways (gw1, gw2) everything is fine when gw1 is master and gw2 is slave, but swapping the roles breaks things. We then captured packets:

sudo ip netns exec qrouter-8126dc65-4fbe-4f22-ae77-1e3b7916b085 tcpdump -i any -w out.pcap arp or icmp 
sudo ip netns exec qrouter-8126dc65-4fbe-4f22-ae77-1e3b7916b085 ip n
sudo ovs-tcpdump -i br-int "(arp or icmp) and net 192.168.100.0/24 and ether host fa:16:3e:51:a6:8a" -p ovs_tcpdump_capture.pcap
tshark -r attachments/ovs_tcpdump_port_br-int_00189699.pcap | grep duplicate
sudo ovs-tcpdump -e -i br-int "(arp or icmp)" -w ovs_tcpdump_port_br-int.pcap 
sudo ovs-tcpdump -e -i tapebd2a710-3a "arp or icmp" -w ovs_tcpdump_port_ebd2a710-3a.pcap
watch -n 1 'sudo ovs-ofctl dump-flows br-tun| grep <mac addr of vm with ip 192.168.100.6>'
sudo ip netns exec qrouter-8126dc65-4fbe-4f22-ae77-1e3b7916b085 arping -I qr-ebd2a710-3a 192.168.100.6

The following two flows were found on the compute node:

cookie=0xc47b830f360f3102, duration=262107.239s, table=20, n_packets=166506, n_bytes=8374732, idle_age=3, hard_age=65534, priority=2,dl_vlan=4,dl_dst=fa:16:3e:51:a6:8a actions=strip_vlan,load:0x1->NXM_NX_TUN_ID[],output:46 
cookie=0xc47b830f360f3102, duration=2138996.306s, table=20, n_packets=3039, n_bytes=331269, hard_timeout=300, idle_age=65534, hard_age=0, priority=1,vlan_tci=0x0004/0x0fff,dl_dst=fa:16:3e:51:a6:8a actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:48 

Obviously, deleting the first flow fixes it:

ovs-ofctl del-flows br-tun "table=20,priority=2,dl_vlan=4,dl_dst=fa:16:3e:51:a6:8a"

But why does this happen? This flow comes from l2pop, added by this line, which is supposed to be added only in the non-HA case - https://github.com/openstack/neutron/blob/stable/queens/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L323

The cause is in the database: the device_owner of the record below is not network:ha_router_replicated_interface.

select device_owner,id,name,network_id,mac_address,device_id from ports where mac_address="fa:16:3e:51:a6:8a" 
select device_owner,id,name,network_id,mac_address,device_id from ports where device_owner="network:ha_router_replicated_interface"

device_owner	id	name	network_id	mac_address	device_id
network:router_interface	ebd2a710-3a94-466c-98e8-4b436fbb4b43		591b004e-d640-4df6-bee1-01fc2c38f166	fa:16:3e:51:a6:8a	8126dc65-4fbe-4f22-ae77-1e3b7916b085

The fix:

openstack router set --disable 8126dc65-4fbe-4f22-ae77-1e3b7916b085 
openstack router set --no-ha 8126dc65-4fbe-4f22-ae77-1e3b7916b085 
openstack router set --ha 8126dc65-4fbe-4f22-ae77-1e3b7916b085 
openstack router set --enable 8126dc65-4fbe-4f22-ae77-1e3b7916b085 

Is it related to this bug? - Bug #1869887 “L3 DVR ARP population gets incorrect MAC address i...” : Bugs : neutron

20191105 Update

Another related bug - Bug #1839592 “Open vSwitch (Version 2.9.2) goes into deadlocked ...” : Bugs : openvswitch package : Ubuntu

20200810 Update

https://review.opendev.org/#/c/745441/

20210723 Update

Originally, the l3-agent set the gateway port link up by default. For HA routers the gateway port is plugged on all scheduled hosts, and while those ports are in the backup state they send IPv6 (MLDv2) packets, which makes the outside world wrongly believe the gateway port is alive; as a result, L3 traffic on the master node will be broken.
This patch sets those ports down and only brings them up when VRRP enters the master state - https://review.opendev.org/c/openstack/neutron/+/717740

With that, the l3-agent controls whether these ports are up or down; if the l3-agent has not set a port up, L3 will not send traffic. The following patch waits for the udev problem to go away (but in practice it doesn't work) - https://review.opendev.org/c/openstack/neutron/+/801857

The udev problem looks like this:

 Keepalived_vrrp[195525]: Error sending gratuitous ARP on qg-733745b8-b2 for 31.44.216.35
Jul 21 09:03:54 dcs1-clp-nod11 Keepalived_vrrp[195525]: Error sending gratuitous ARP on qg-733745b8-b2 for xxx
Jul 21 09:03:54 dcs1-clp-nod11 Keepalived_vrrp[195525]: VRRP: Error sending ndisc unsolicited neighbour advert on qg-733745b8-b2 for xxx

However, even after repairing a router via the remove/add operations, duplicate replies were still seen when pinging the FIP; it is not yet clear why.

The logs contain: network error - network is unreachable (keepalived).

The problem above actually has nothing to do with udev. The patch above introduced a new problem of its own: because it uses sleep(3) (https://review.opendev.org/c/openstack/neutron/+/801857/1/neutron/agent/linux/interface.py), it can cause an eventlet timeout > 3, so after "systemctl restart neutron-l3-agent" some keepalived processes fail to spawn and the number of keepalived processes no longer equals 2x the number of routers (watch 'pgrep -alf keepalived| grep -v state| wc -l'). If a keepalived process was never spawned, qg-xxx naturally does not exist, so the spawn_state_change_monitor process naturally cannot find qg-xxx.
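
Based on the counting trick above, a small sanity check after restarting neutron-l3-agent might look like this (a sketch; it assumes every directory under /var/lib/neutron/ha_confs corresponds to one HA router hosted on this node):

routers=$(ls -d /var/lib/neutron/ha_confs/*/ 2>/dev/null | wc -l)
keepalived=$(pgrep -alf keepalived | grep -v state | wc -l)
echo "routers=$routers keepalived_processes=$keepalived (expected $((2 * routers)))"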

20210927 Update - multiple standby

Besides the apparmor or sleep(3) issues above preventing the keepalived process from starting, multiple standby can also be caused by all of the HA ports being DOWN.

Deleting all of these ports first and then running the following commands recreates them, but their status is still DOWN:

openstack router set --disable provider-router
openstack router set --no-ha provider-router
openstack router set --ha provider-router
openstack router set --enable provider-router
openstack port list |grep 'HA port'

When the l3-agent starts it sets all HA ports to DOWN - https://review.opendev.org/c/openstack/neutron/+/470905
The l2-agent should then set these HA ports to ACTIVE, but the following error occurred:

2021-09-27 11:23:12.478 6850 ERROR neutron.agent.rpc oslo_messaging.rpc.client.RemoteError: Remote error: InvalidTargetVersion Invalid target version 1.1

This is because we only updated the packages on neutron-gateway but not on neutron-server, leading to an RPC mismatch. But after also updating neutron-api, the following errors appeared:

Failed to get details for device ...
Table 'neutron.subnet_dns_publish_fixed_ips' doesn't exist

So presumably the database needs to be upgraded:

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
neutron-db-manage current --verbose
neutron-db-manage upgrade heads

But it then failed with "KeyError: 'formatters'" because /usr/lib/python3/dist-packages/neutron/db/migration/alembic.ini was missing, and after manually copying one in, other errors appeared.

20211209 Update

This time the problem was that o-hm0 was in the DOWN state:

  1. It has the fe00 IPv6 mgmt address; it is merely in the DOWN state. So neutron-keepalived-state-change should be fine since it can still trigger active/standby correctly, and keepalived should be fine too since the IP is present.
  2. It should be an l3-agent problem that the port is not UP; the l3-agent problem may be CPU load caused by too many security groups.
  3. Disabling HA for the mgmt network may solve the problem.
  4. If it is only an IPv6 problem, the mgmt network could also be switched from IPv6 to IPv4.
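
Some hedged first checks for this situation (the interface name comes from the case above; the port name pattern used in the grep is only an assumption and varies by deployment):

ip -o link show o-hm0
ip -o addr show o-hm0
openstack port list | grep -i health    # locate the health-manager port and check its status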

20220221 Update

Some L3HA router ports were in the DOWN state, and their OVS tag was seen to be 4095:

var/log/neutron/neutron-openvswitch-agent.log:2022-02-07 19:55:51.305 31044 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e3f5d2aa-a151-449c-b609-e657488f81ef - - - - -] Port 'tape6bf92da-ce' has lost its vlan tag '6'! Current vlan tag on this port is '4095'.

This kind of log was also seen:

var/log/neutron/neutron-openvswitch-agent.log.1:2022-02-07 04:51:14.374 31044 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e3f5d2aa-a151-449c-b609-e657488f81ef - - - - -] VIF port: 8e12c287-3ed9-46b2-a2d6-bd41de86e0a4 admin state up disabled, putting on the dead VLAN
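
Ports that have been moved onto the dead VLAN can be listed straight from OVS (a sketch):

sudo ovs-vsctl --columns=name,tag find Port tag=4095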

Recapping the common L3HA bugs mentioned in this article:

  • The l2-agent creates the HA port (the port starting with 169 is the HA port).
  • The l3-agent initially sets ha port=down (an l3-agent heartbeat failure triggering AGENT_REVIVED re-initialises and sets ha port=down again), and also sets gateway port=down.
  • The l2-agent should set the HA port to ACTIVE (there was a case where only the neutron-gateway packages were updated but not neutron-server, causing an RPC mismatch and hence the multiple-HA-port-DOWN problem).
  • The l3-agent should set the gateway port to UP (there was also a case where too many SGs caused excessive CPU load and the gateway port could not come UP).
  • The three l3-agents enable keepalived for the HA port, so the master node sees an IP change (multiple standby can appear when an apparmor problem prevents keepalived from being updated, and also when a sleep wrongly introduced into a greenthread prevents the keepalived process from being spawned).
  • Once the keepalived process is spawned, the qg-xxx interface appears.
  • neutron-keepalived-state-change should start before the keepalived process; after observing the IP change via ip monitor it writes the master/slave state to the DB over MQ (sometimes the write fails because of an MQ error and, since the IP change will not be triggered again, the multiple-master problem appears; ha_vrrp_health_check_interval makes each node periodically ping the external gateway and re-elect the master when the ping fails).
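
Putting that checklist together, a quick per-node health check might look like the following sketch (the router ID is a placeholder; all of the commands appear earlier in this article):

r=<router-id>
openstack port list | grep 'HA port'                        # should be ACTIVE
neutron l3-agent-list-hosting-router $r                     # exactly one active expected
pgrep -alf keepalived | grep -v state | grep $r             # keepalived spawned for this router?
sudo cat /var/lib/neutron/ha_confs/$r/state                 # master or backup on this node
sudo ip netns exec qrouter-$r ip -o link show | grep qg-    # gateway port UP only on the master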

According to https://review.opendev.org/c/openstack/neutron/+/364407, the relationship between the l2-agent and the l3-agent is:

  • The l2-agent sets port=ACTIVE via neutron-api.
  • The l3-agent watches port update events over MQ, which then trigger L3's routers_updated via routers_updated_on_host.
  • In the routers_updated processing, the l3-agent checks that port=ACTIVE before spawning keepalived.

Here the restart order of the l2-agent and l3-agent after the Jan 15 upgrade was correct, so this is simply a case of the l2-agent not updating port=ACTIVE:

* neutron-l3-agent.service - OpenStack Neutron L3 agent
  Active: active (running) since Sat 2022-01-15 07:36:17 UTC; 3 weeks 2 days ago
* neutron-openvswitch-agent.service - Openstack Neutron Open vSwitch Plugin Agent
  Active: active (running) since Sat 2022-01-15 07:35:49 UTC; 3 weeks 2 days ago
* neutron-server.service - OpenStack Neutron Server
  Active: active (running) since Sat 2022-01-15 07:41:54 UTC; 3 weeks 6 days ago

The l3-agent can also repair ports in the binding_failed state via this path (sync_routers -> _ensure_host_set_on_ports -> update_port -> _bind_port_if_needed -> _bind_port).

But 'openstack port show' shows that the port is not in the failed state.

Checking var/lib/neutron/ha_confs/8f902480-7664-4921-99ee-452ea09a5246/neutron-keepalived-state-change.log shows that there are no log entries at all after 2022-01-15 07:41:34.

2022-01-15 07:41:34.992 6935 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-8f902480-7664-4921-99ee-452ea09a5246', 'arping', '-A', '-I', 'qr-8e12c287-3e', '-c', '1', '-w', '1.5', '10.147.192.1'] create_process /usr/lib/python3/dist-packages/neutron/agent/linux/utils.py:87

2022-01-15 07:41:34 is exactly when neutron-api was upgraded, so there is good reason to believe this is related to the upgrade.

There are three HA nodes (050, 051, 052). 051 has been the master since Jan 15, but var/lib/neutron/ha_confs/8f902480-7664-4921-99ee-452ea09a5246/neutron-keepalived-state-change.log shows that 052 was the master before the Jan 15 upgrade.

So on Jan 15, on 052:

  • neutron-l3-agent and openvswitch-agent were upgraded around 07:35.
  • neutron-api was upgraded around 07:41; 051 and 052 both logged this error: 2022-01-15 07:36:02.604 1853 ERROR neutron AssertionError: do not call blocking functions from the mainloop, which caused a keepalived re-election.
  • 051 was elected this time; 051 had previously been the standby node, so its router port was down, which meant ha_check_script_244.sh did not work and could not start.

According to (OpenStack upgrade — charm-deployment-guide 0.0.1.dev429 documentation), the upgrade should go neutron-api first and then the agents; it seems the order was reversed here.
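
For the charm-based deployment used in this article, that order corresponds roughly to bumping openstack-origin on neutron-api before the gateway units (a sketch only; the cloud archive pocket is a placeholder):

juju config neutron-api openstack-origin=cloud:bionic-train
juju config neutron-gateway openstack-origin=cloud:bionic-train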

Another test method (unrelated to this case, which was caused by the upgrade above):

./generate-bundle.sh -s bionic -r stein  -n vrrp:stsstack --use-stable-charms --l3ha --num-compute 3 --run
./configure
cat << EOF | tee /tmp/script.sh
#!/bin/bash 
function create_net_struct()
{
    neutron router-create scale-test-router-\${1}
    neutron net-create scale-test-net-\${1}
    neutron subnet-create --name scale-test-subnet-\${1} scale-test-net-\${1} 172.16.\${1}.0/24
    neutron router-gateway-set scale-test-router-\${1} ext_net
    neutron router-interface-add scale-test-router-\${1} subnet=scale-test-subnet-\${1}
}
create_net_struct \$1
EOF
chmod +x /tmp/script.sh
for i in {1..100};do /tmp/script.sh $i;done
openstack port list |grep 172.16

20220223 - about the udev issue

A few more words on the udev-related issue.

For the bug (https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1927868), 16.4.0 merely reverted this patch (https://opendev.org/openstack/neutron/commit/b9073d7c3bc9d1d9254294d698a6fae319fd22ad), but the earlier bug still exists (https://bugs.launchpad.net/neutron/+bug/1916024): the l3-agent sets the router link UP before spawning keepalived, but when setting it UP it cannot find qg-xxx, as if the udev device disappeared first and was created later.
multipathd also seemed to have a similar udev problem before (https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1621340).
Looking at the log below (https://review.opendev.org/c/openstack/neutron/+/801857/1/neutron/agent/linux/interface.py#b322): how did qg-cd4b6b7d-b0 disappear after being set down?
Feb 16 13:03:20 nod9 neutron-keepalived-state-change[2313970]: 2022-02-16 03:03:20.638 2313970 INFO neutron.agent.linux.ip_lib [-] Failed sending gratuitous ARP to 31.44.218.122 on qg-cd4b6b7d-b0 in namespace qrouter-321b8961-0893-4c23-b843-22e8f96fae1d: Exit code: 2; Stdin: ; Stdout: Interface "qg-cd4b6b7d-b0" is down
Feb 16 13:03:20 nod9 neutron-keepalived-state-change[2313970]: 2022-02-16 03:03:20.639 2313970 INFO neutron.agent.linux.ip_lib [-] Interface qg-cd4b6b7d-b0 or address 31.44.218.122 in namespace qrouter-321b8961-0893-4c23-b843-22e8f96fae1d was deleted concurrently

According to the explanation (https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1927868/comments/68), the root cause is that the timeout added in set_link_status (https://github.com/openstack/neutron/blob/d8f1f1118d3cde0b5264220836a250f14687893e/neutron/agent/linux/interface.py#L328) triggers a bug in the underlying oslo_privsep; the proper way to implement this kind of timed wait is to use a PrivContext with the timeout feature (https://review.opendev.org/c/openstack/oslo.privsep/+/794993/5/doc/source/user/index.rst#45). provisioning_blocks is mainly used so that a compute port can tell nova, via neutron, that network setup has completed and nova can go ahead and power on the VM. But neutron does not check device_owner, so even ports that belong to neutron itself rather than to nova (such as internal service ports: router_gateway, router_interface and dhcp) also get the provisioning_blocks attribute, which means provisioning_blocks may cause some timeout for resource setup. See: https://bugs.launchpad.net/neutron/+bug/1930432
The flow for a nova/VM port is:
1, nova creates the port
2, the ml2 mechanism driver inserts provisioning_blocks for the port
3, nova plugs the device
4, the l2-agent sets up the flows for the port and calls update_device_list to neutron-server
5, neutron-server sets the port status to ACTIVE
6, neutron-server notifies nova of the successful vif-plugged event

Ports such as router_gateway, router_interface and dhcp should not be given the provisioning_blocks attribute:
1, a neutron l3/dhcp/x service plugin creates the port
2, no provisioning_blocks
3, the l3/dhcp/x agent plugs the port
4, the l2-agent sets up the flows for the port and calls update_device_list to neutron-server
5, neutron-server sets the port status to ACTIVE

If these neutron internal ports do have the provisioning_blocks attribute, they will never become ACTIVE, which may cause other problems.
One way to handle this is to set a timeout in oslo_privsep - https://bugs.launchpad.net/neutron/+bug/1930401
It adds a timeout to PrivContext and an entrypoint_with_timeout decorator.

1. nova creates the port
2. the 'openvswitch' mechanism driver inserts a provisioning_block for this port
3. nova calls the related interface to plug the device
4. the ovs-agent sets the flows for the port and calls update_device_list to neutron-server
5. neutron-server tries to set the port status to ACTIVE
6. neutron-server notifies nova that "vif-plugged" succeeded

This works fine for a VM and its ports. But for neutron service ports, like router_gateway, router_interface and dhcp, it is unnecessary, because there is no dependency among neutron resources; neutron just needs to know that the ports have been set up properly. Another point is that most of these internal service ports have no security group/port security.

It does not really seem to be a udev problem either. There is one scenario that could in theory produce this: both processes on the master die at the same time so it cannot update its state to standby, another node is promoted to master, and multiple masters appear. Then, when the l3-agent is restarted, it starts neutron-keepalived-state-change first, which, still believing this node is the master, sets qg-xxx to UP - but at that point qg-xxx does not exist, because qg-xxx is created by keepalived, which has not started yet.

So it feels like, with the current code, restarting the l3-agent on the old master would reproduce this problem. There is still a snag with this theory though, because stopping the l3-agent on the old master does not delete qg-xxx.

  •   First stop both processes at the same time to create multiple masters, then some other operation (perhaps upgrading or removing neutron-l3-agent?) causes qg-xxx to no longer exist? The likelihood of this is very low.
  • The l3-agent does guarantee the start-up order, but it does not guarantee that keepalived creates qg-xxx before neutron-keepalived-state-change checks whether qg-xxx exists, does it? So a timeout is still needed when checking for qg-xxx - will the underlying oslo add the timeout automatically (https://review.opendev.org/c/openstack/oslo.privsep/+/794993/5/doc/source/user/index.rst#45)? The earlier approach of using sleep to add the timeout will certainly trigger the oslo bug (https://github.com/openstack/neutron/blob/d8f1f1118d3cde0b5264220836a250f14687893e/neutron/agent/linux/interface.py#L328).
I feel there is one possibility that could cause this problem in theory. Normally the l3-agent starts neutron-keepalived-state-change and keepalived, and it can ensure that neutron-keepalived-state-change starts before keepalived:

l3_agent#_process_routers_loop -> _process_router_update -> _process_router_if_compatible -> _process_added_router -> ha_router#initialize()
l3_agent#_process_routers_loop -> _process_router_update -> _process_router_if_compatible -> _process_updated_router -> ha_router#process()

But if these two processes suddenly die, external_process.py#_respawn_action will restart them, and there is no way to guarantee their start-up sequence, so keepalived may start before neutron-keepalived-state-change. Suppose there are two nodes, where node1 is the master and node2 is the standby at the initial time:
1, The two processes suddenly die on node1, so the qg-xxx interface is deleted and node2 becomes the master.
2, Restart the l3-agent on node1. Since the two processes died at the same time, neutron-keepalived-state-change never had a chance to set the DB to standby, so two masters appear at this point.
3, On node1, neutron-keepalived-state-change starts before keepalived as normal this time; on start it will set the qg-xxx link to UP, but at that moment qg-xxx may not exist because keepalived hasn't started yet.

keepalived_state_change.py -> L3AgentKeepalivedStateChangeServer#run -> KeepalivedStateChangeHandler#enqueue -> enqueue_state_change -> ha#_enqueue_state_change -> ha_router#set_external_gw_port_link_status

For case 2 above, should the earlier broken fix (https://github.com/openstack/neutron/blob/d8f1f1118d3cde0b5264220836a250f14687893e/neutron/agent/linux/interface.py#L328) be changed to the following code so as to use oslo's timeout feature?

https://review.opendev.org/c/openstack/oslo.privsep/+/794993/5/doc/source/user/index.rst#41
$ git diff
diff --git a/neutron/privileged/__init__.py b/neutron/privileged/__init__.py
index 296dba5c77..1c7091032a 100644
--- a/neutron/privileged/__init__.py
+++ b/neutron/privileged/__init__.py
@@ -27,4 +27,5 @@ default = priv_context.PrivContext(caps.CAP_DAC_OVERRIDE,caps.CAP_DAC_READ_SEARCH,caps.CAP_SYS_PTRACE],
+    timeout=5)

More thoughts and a summary:

1, neutron-server: the neutron L3/DHCP/X service plugin creates the router port in the DB (this port is not a compute/VM port, so the ml2 mechanism driver does not insert provisioning_blocks for it), and sets HA_port=DOWN by default - https://review.opendev.org/c/openstack/neutron/+/470905
2, The l2-agent schedules the 3 HA_PORTs onto 3 nodes and sets up the port bindings.
3, The l3-agent starts the neutron-keepalived-state-change process:
l3_agent#_process_routers_loop -> _process_router_update -> _process_router_if_compatible -> _process_added_router -> ha_router#initialize()
4, The L3/DHCP/X agent plugs the port (also creating the qg-xxx interface) in RouterInfo#process.
The l3-agent goes on to call router_info#process, which eventually creates qg-xxx in external_gateway_added and sets the qg-xxx link UP or DOWN depending on whether this node is the master:
https://review.opendev.org/c/openstack/neutron/+/717740/2/neutron/agent/l3/ha_router.py#502
router_info#process -> process_external -> _process_external_gateway -> ha_router#external_gateway_added -> ha_router#_plug_external_gateway
process_external is implemented in both router_info and dvr_local_router; only the router_info path is shown here.
external_gateway_added is implemented in router_info, dvr_edge_ha_router, dvr_edge_router, dvr_local_router and ha_router; only the ha_router path is shown here.
5, The l2-agent sets up the flows for the port and calls update_device_list to neutron-server.
6, neutron-server sets the port status to ACTIVE.
7, The l3-agent starts the keepalived process in HaRouter#process:
l3_agent#_process_routers_loop -> _process_router_update -> _process_router_if_compatible -> _process_updated_router -> ha_router#process()
The l3-agent watches port update events over MQ, which then trigger L3's routers_updated via routers_updated_on_host, and the l3-agent only spawns keepalived when HA_PORT=ACTIVE - https://review.opendev.org/c/openstack/neutron/+/357458/9/neutron/agent/l3/ha_router.py
Note: HaRouter is a subclass of RouterInfo. The code below is meant to create qg-xxx first and then start keepalived, but could a race condition occur here?
508     def process(self):                                                          
509         super(HaRouter, self).process()                                                                                                            
511         self.set_ha_port()                                                      
512         LOG.debug("Processing HA router with HA port: %s", self.ha_port)        
513         if (self.ha_port and                                                    
514                 self.ha_port['status'] == n_consts.PORT_STATUS_ACTIVE):         
515             self.enable_keepalived()

8, keepalived sets the VIP. qg-xxx is created by the l3-agent on all 3 nodes, and keepalived sets the VIP on one master node.
9, keepalived_state_change sets qg-xxx=UP.
keepalived_state_change initially sets the state to master if a VIP is present; then, after detecting a VIP change via ip monitor, it needs to set qg-xxx to UP, and this is where the "qg-xxx not found" error sometimes occurs - https://bugs.launchpad.net/neutron/+bug/1916024
keepalived_state_change.py -> L3AgentKeepalivedStateChangeServer#run -> KeepalivedStateChangeHandler#enqueue -> enqueue_state_change -> ha#_enqueue_state_change -> ha_router#set_external_gw_port_link_status

In summary:
1, 代码会保证在l2-agent设置HA_PORT=ACTIVE之后,再由l3-agent来启动keepalived进程
2, But as soon as keepalived_state_change detects a change it sets qg-xxx=UP, and at that moment qg-xxx may not exist.
a, On the first start of the l3-agent there is no qg-xxx yet. Is there a race between the l3-agent creating qg-xxx after start-up and neutron-keepalived-state-change setting qg-xxx up? And why would keepalived set the VIP on exactly this node?
b, From the second start of the l3-agent onwards, stopping and restarting the l3-agent does not delete qg-xxx anyway.
The sleep in this code introduces another oslo bug - https://github.com/openstack/neutron/blob/d8f1f1118d3cde0b5264220836a250f14687893e/neutron/agent/linux/interface.py#L328
Before that change the code looked like the following: it simply returned when qg-xxx did not exist, which also seems reasonable.
    def set_link_status(self, device_name, namespace=None, link_up=True):
        ns_dev = ip_lib.IPWrapper(namespace=namespace).device(device_name)
        if not ns_dev.exists():
            LOG.debug("Device %s may concurrently be deleted.", device_name)
            return
        if link_up:
            ns_dev.link.set_up()
        else:
            ns_dev.link.set_down()

After the backup/master transitions below kept repeating, the neutron-l3-agent service ended up masked; in addition /var/lib/neutron/ha_confs/ was empty, which shows keepalived never managed to set the VIP:
2022-02-16 02:52:28.739 2233129 INFO neutron.agent.l3.ha [-] Router 321b8961-0893-4c23-b843-22e8f96fae1d transitioned to backup on agent nod9
2022-02-16 02:52:28.740 2233129 INFO neutron.agent.l3.ha_router [-] Set router 321b8961-0893-4c23-b843-22e8f96fae1d gateway device link state to down.
...
2022-02-16 03:03:22.519 2233129 INFO neutron.agent.l3.ha [-] Router 321b8961-0893-4c23-b843-22e8f96fae1d transitioned to master on agent nod9
2022-02-16 03:03:22.520 2233129 INFO neutron.agent.l3.ha_router [-] Set router 321b8961-0893-4c23-b843-22e8f96fae1d gateway device link state to up.
Why did it keep flapping? Because when keepalived_state_change transitioned this router to master and tried to set qg-xxx=UP, it hit a "qg-xxx not found" error, which made keepalived flap back and forth (a quick check for this follows the log lines below):
Feb 16 13:03:20 nod9 neutron-keepalived-state-change[2313970]: 2022-02-16 03:03:20.638 2313970 INFO neutron.agent.linux.ip_lib [-] Failed sending gratuitous ARP to 31.44.218.122 on qg-cd4b6b7d-b0 in namespace qrouter-321b8961-0893-4c23-b843-22e8f96fae1d: Exit code: 2; Stdin: ; Stdout: Interface "qg-cd4b6b7d-b0" is down
Feb 16 13:03:20 nod9 neutron-keepalived-state-change[2313970]: 2022-02-16 03:03:20.639 2313970 INFO neutron.agent.linux.ip_lib [-] Interface qg-cd4b6b7d-b0 or address 31.44.218.122 in namespace qrouter-321b8961-0893-4c23-b843-22e8f96fae1d was deleted concurrently
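A hedged way to confirm this on the node is to check whether the gateway device actually exists inside the router namespace, and what its link state is (names taken from the logs above):

ROUTER_ID=321b8961-0893-4c23-b843-22e8f96fae1d
# does any qg- device exist in the qrouter namespace at all?
sudo ip netns exec qrouter-$ROUTER_ID ip -o link show | grep -o 'qg-[^:@ ]*' || echo "no qg- device yet"
# and if it exists, is it UP or DOWN?
sudo ip netns exec qrouter-$ROUTER_ID ip link show qg-cd4b6b7d-b0 2>/dev/null | grep -o 'state [A-Z]*'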
At this point, was the router actually healthy on the other node, nod11? And if nod11 was healthy, why was nod9 promoted to master at all?
1, nod9 runs keepalived=1:2.0.19-2ubuntu0.1 and the version on nod11 is unknown; could a version mismatch cause a failover?
2, The problem appeared on nod9 while its firmware was being upgraded; 'neutron l3-agent-list-hosting-router $ROUTER' showed everything as active (stopping both processes at the same time produces multiple active entries in the DB; also, would a firmware upgrade delete qg-xxx?).
In another environment (see the 20220221 update above) the versions are identical and both processes started normally, yet no "gateway device link state" log lines were found, which means keepalived_state_change never observed a VIP change.
The ha_check_script_244.sh script reported errors, but that did not trigger a failover either.
The l3-agent only spawns keepalived once HA_PORT=ACTIVE, and here the port was DOWN, so this has nothing to do with keepalived or keepalived_state_change further down the flow; it should only concern the l2-agent.
The l3-agent starts its work once the l2-agent has set the port to ACTIVE; this path can also call back into the l2-agent to fix port-binding problems (sync_routers -> _ensure_host_set_on_ports -> update_port -> _bind_port_if_needed -> _bind_port). The port binding here looked fine, the port was simply DOWN (a quick way to inspect the binding is shown below).
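For reference, a hedged way to eyeball an HA port's binding and status from the API (<ha-port-id> is a placeholder; HA ports have device_owner=network:router_ha_interface):

openstack port list --device-owner network:router_ha_interface -c ID -c Status
openstack port show <ha-port-id> -c binding_host_id -c binding_vif_type -c status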
Why was the port DOWN? It turned out the HA router port had been put on the dead VLAN (tag 4095):
a, which meant step 5 above never happened - the l2-agent could not call update_device_list to neutron-server;
b, or the provisioning_blocks on the HA port were never cleared, so it could never become ACTIVE - https://bugs.launchpad.net/neutron/+bug/1930432
c, but here (see the 20220221 update above) it looks like the cause was an incorrect upgrade order: neutron-api should be upgraded before the agents, and upgrading neutron-api last triggered a keepalived re-election that elected the previous standby as master. The following might help (an OVS check for the dead-VLAN case is shown after these commands):
juju config neutron-gateway keepalived-healthcheck-interval=0 
juju run-action neutron-gateway/1 show-deferred-events --wait
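For the dead-VLAN case mentioned above, a quick hedged check on the gateway node: ports that the ovs agent has parked on the dead VLAN (tag 4095) will never become ACTIVE.

# list any br-int port currently carrying the dead tag 4095
juju ssh neutron-gateway/0 -- sudo ovs-vsctl --columns=name,tag list Port | grep -B1 ': 4095'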

The real root cause:
After keepalived sets the VIP on a node and transitions it to master (after keepalived sends GARPs), neutron-keepalived-state-change on that node observes the VIP change and then sets qg-xxx=UP.
But if the MQ or the system load is heavy, router updates become slow and the time between keepalived setting the VIP and neutron-keepalived-state-change setting qg-xxx=UP exceeds keepalived's timeout, so keepalived keeps migrating the VIP to other nodes. The following two keepalived values therefore need to be changed (a way to measure the gap from the l3-agent log is shown after them):
vrrp_garp_master_delay - set to 60
vrrp_garp_master_repeat - defaults to 5
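To measure the gap on a loaded node, a hedged check is to put the master/backup transitions next to the "gateway device link state" messages in the l3-agent log and read the delay off the timestamps (router id taken from the logs above):

ROUTER_ID=321b8961-0893-4c23-b843-22e8f96fae1d
grep -E "transitioned to (master|backup)|gateway device link state" /var/log/neutron/neutron-l3-agent.log | grep $ROUTER_ID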

Another solution is to not set qg-xxx DOWN at all; it was set DOWN in the first place because of https://bugs.launchpad.net/neutron/+bug/1859832
That neutron bug was caused by the standby nodes also sending MLDv2 IPv6 packets, and keepalived (>2.0.19) seems to have fixed this already - https://github.com/acassen/keepalived/commit/b10bbfc2a2b216487cea5a586c55765275e41253
So reverting that neutron patch seems low-risk, especially in a non-IPv6 environment.

Upstream is also discussing this problem (after keepalived puts the VIP on a node it should send GARPs, but they fail because qg-xxx is DOWN) - https://bugs.launchpad.net/neutron/+bug/1952907
Upstream is discussing whether to revert the patch for 1859832 - https://meetings.opendev.org/meetings/neutron_drivers/2022/neutron_drivers.2022-03-04-14.03.log.txt

However, after reverting these two commits (c52029c39aa824a67095fbbf9e59eff769d92587 and 5e3c8640b0412a98f2aa8f9ae710ffb7e6d0fcc5), newly added routers all end up standby (existing routers are fine). That is because the following needs to be added in l3_agent#_router_added -> ha#initialize -> (first set_ha_port, then _init_keepalived_manager):

self.keepalived_manager.get_full_config_file_path('test')
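Presumably this helps because get_full_config_file_path() ensures the per-router directory under /var/lib/neutron/ha_confs/ exists; a hedged check that the directory (and keepalived.conf) really shows up for a new router:

# router id below is the example one used earlier in this post
juju ssh neutron-gateway/0 -- sudo ls -l /var/lib/neutron/ha_confs/b8d4435b-bd83-46fd-a828-6d8a0b52d23a/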

But that revert patch was never actually merged, see Bug #1965297 "l3ha don't set backup qg ports down" (neutron).

The community is now trying to land this instead - https://review.opendev.org/c/openstack/neutron/+/839671

Until then, it is best to use non-HA routers as a workaround; the revert patch could of course also be approved as a test PPA (not a hotfix).

20220507 update

Another customer hit the multiple-standby problem: for a newly created router neither the snat-xxx nor the qrouter-xxx namespace existed on the gateway nodes.
We suggested non-HA routers as the workaround, but after switching they found that normal VMs work while octavia VMs do not.

1, neutron-server creates 3 HA ports
2, l2-agent binds 3 HA ports to 3 gw nodes
3, l3-agent starts neutron-keepalived-state-change process
4, l3-agent sets the initial qg-xxx state to DOWN
5, l2-agents sets the flow for HA interface, and calls update_device_list to neutron-server
6, neutron-server sets the status of HA port to ACTIVE
7, l3-agent starts keepalived process only after the status of HA port is ACTIVE
8, keepalived process selects one gw node as master node, and set VIP on it
9, neutron-keepalived-state-change sets qg-xxx=UP after observing the VIP change

Judging from the code flow above, the problem very likely lies in step 6, and indeed this error was found on neutron-server (Timeout in RPC method update_all_ha_network_port_statuses).
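Before touching RPC timeouts, a hedged way to verify step 6 directly is to look at the status of the router's HA ports in the API; if they stay DOWN, keepalived will never be spawned (the router id below is the one from the customer logs further down):

ROUTER_ID=448fe765-3da3-4cbd-b0fb-4e33b968fb40
openstack port list --device-id $ROUTER_ID --device-owner network:router_ha_interface -c ID -c Status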
We then suggested setting: juju config neutron-api rpc-response-timeout=600

$ grep -r 'rpc_response_timeout' neutron-l3-agent.log |grep 600
2022-05-06 22:32:20.741 59686 DEBUG neutron.wsgi [-] rpc_response_timeout = 600 log_opt_values /usr/lib/python3/dist-packages/oslo_config/cfg.py:2581

$ grep -r 'L3 agent started' gw*/neutron-l3-agent.log
gw7/neutron-l3-agent.log:2022-05-06 22:37:52.966 31491 INFO neutron.agent.l3.agent [-] L3 agent started
gw8/neutron-l3-agent.log:2022-05-06 22:35:45.324 8438 INFO neutron.agent.l3.agent [-] L3 agent started
gw9/neutron-l3-agent.log:2022-05-06 13:31:50.922 52584 INFO neutron.agent.l3.agent [-] L3 agent started
gw9/neutron-l3-agent.log:2022-05-06 15:59:01.750 45839 INFO neutron.agent.l3.agent [-] L3 agent started
gw9/neutron-l3-agent.log:2022-05-06 16:31:11.984 28376 INFO neutron.agent.l3.agent [-] L3 agent started
gw9/neutron-l3-agent.log:2022-05-06 22:36:51.459 59686 INFO neutron.agent.l3.agent [-] L3 agent started

2022-05-06 22:35:36.588 9904 WARNING neutron.scheduler.l3_agent_scheduler [req-1cfcff17-e233-48ce-a6ef-51bd4a13047e 03f4f14a022844ab8571942c5f178668 e2e18f75236d4af88eacfdf099cd694c - da40abec78bd4451ae577ac4df47767d da40abec78bd4451ae577ac4df47767d] No L3 agents can host the router 448fe765-3da3-4cbd-b0fb-4e33b968fb40
2022-05-06 22:35:36.959 9904 DEBUG neutron.db.l3_dvr_db [req-1cfcff17-e233-48ce-a6ef-51bd4a13047e 03f4f14a022844ab8571942c5f178668 e2e18f75236d4af88eacfdf099cd694c - da40abec78bd4451ae577ac4df47767d da40abec78bd4451ae577ac4df47767d] SNAT interface ports not created: 448fe765-3da3-4cbd-b0fb-4e33b968fb40 _create_snat_interfaces_after_change /usr/lib/python3/dist-packages/neutron/db/l3_dvr_db.py:254

However, the command 'juju config neutron-api rpc-response-timeout=600' itself restarts neutron-l3-agent:
the option was set at 2022-05-06 22:32:20.741,
the restart it caused finished at 2022-05-06 22:36:51.459,
but the customer's new test router was created at 2022-05-06 22:35:36.588 (i.e. while the l3-agent was still restarting).

This kind of error could still be seen:

2022-05-07 09:19:38.986 58153 DEBUG neutron.db.l3_agentschedulers_db [req-6ff1d856-6760-44bd-b753-0eb1123e0b25 - - - - -] Agent requested router IDs not scheduled to it. Scheduled: []. Unscheduled: {'8294bbe7-0816-4201-8010-29143cc4ad33'}. Agent: Agent(admin_state_up=True,agent_type='L3 agent',availability_zone='AZ1',binary='neutron-l3-agent',configurations={"agent_mode": "dvr", "ex_gw_ports": 75, "extensions": ["fwaas_v2"], "floating_ips": 11, "handle_internal_only_routers": true, "interface_driver": "openvswitch", "interfaces": 147, "log_agent_heartbeats": false, "routers": 77},created_at=2020-06-26T16:24:04Z,description=<?>,heartbeat_timestamp=2022-05-07T09:19:35Z,host='sgdemr0114bp026.fra-prod1.lightning.corpintra.net',id=a44044fd-05bb-432d-8fac-04cd5da5f77d,load=0,resource_versions=<?>,resources_synced=<?>,started_at=2022-05-06T22:34:03Z,topic='l3_agent'). list_active_sync_routers_on_active_l3_agent /usr/lib/python3/dist-packages/neutron/db/l3_agentschedulers_db.py:357

In the end, surprisingly, everything worked after upgrading from 16.4.1 to 16.4.2 and restarting the l3-agent.

$ git cherry -v 16.4.1 16.4.2 | grep -v -i ovn
warning: refname '16.4.1' is ambiguous.
+ 13275402cee2862f8db2f9179680dd34ca719a51 Implement namespace creation method
+ 463083c71387b249b9aa834e126a17a1a0f3a189 [L3] Use processing queue for network update events
+ 3ebe727ae32aac87b4a54f0d2ed09d0429cb06fb Skip FIP check if VALIDATE_MIGRATION is not True
+ 78ff22e6bb3254859df5c90d41e31b84c70ab5b5 Ensure net dict has provider info on precommit delete
+ cb28ea06d1da02a20fd7d595fbbf755f49f992f4 Make test_throttler happy
+ 16a2fe772234c70b661add0e9cf3f84bbff67840 Randomize segmentation ID assignation
+ 9b0f09456451744592a7d43fe02645a91e695dca VLAN "allocate_partially_specified_segment" can return any physnet
+ 1327a543370abcedcbee75d4653987fa27f34c83 Fix neutron_pg_drop-related startup issues
+ c45f0fd4bce12d9ee3ef07c1ce5d574d3308959f Revert "[L3][HA] Retry when setting HA router GW status."
+ 7eaa84a0cd48adb2ecd7653640d32aea8096e786 Populate self.floating_ips_dict using "ip rule" information
+ ebdf7c9f65594cefc558adad860d36b88e4a9a69 [Functional] Wait for the initial state of ha router before test
+ 707afe70125dc707e870e8b8ee6bc340b35cdf2b Delete SG log entries when SG is deleted
+ 41ed4d7f43bc1747416ddf636afd730b22c786d8 Remove dhcp_extra_opt name after first newline character
+ d0cf4638f595c38a67974a45d5ef76dcf34e8918 [DVR] Set arp entries only for single IPs given as allowed addr pair
+ ba80c87ceed0c7b5ab3f10d509f542f99d4e1a15 Fix "_sync_metadata_ports" with no DHCP subnets
+ 1cbb2d83d17e6cdb91140329b25d1fda72db74a3 [DVR] Check if SNAT iptables manager is initialized
+ 384f2bb2aa85e33daad8b4fe0952fd18b96f10b8 [DVR] Fix update of the MTU in the SNAT namespace
+ 226367eed1148c6e555dcb8087f11b1f85a8eddc Delete log entries when SG or port is deleted
+ 11fe2bff17d8ad27c0c49037fcd36203f9baa32d Don't setup bridge controller if it is already set
+ f144ba95a59b156f41e4fd6333349b381251f79f Check a namespace existence by checking only its own directory
+ 3c99c719d08242f1ced941407d19d09d9d5add1e [DVR] Fix update of the MTU in the DVR HA routers
