
Flannel Installation and Deployment


Flannel

Flannel is an overlay network tool designed by the CoreOS team for Kubernetes. Its goal is to give every CoreOS host running Kubernetes a complete subnet of its own. This article introduces the tool in three parts: what Flannel is, how it works, and how to install and configure it.
Flannel provides a virtual network for containers by allocating one subnet per host. It is based on Linux TUN/TAP, creates the overlay network by encapsulating IP packets in UDP, and uses etcd to keep track of network allocation.
Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes.

How Flannel Works

Flannel is a network planning service designed by the CoreOS team for Kubernetes. In short, it ensures that Docker containers created on different nodes in a cluster receive virtual IP addresses that are unique across the whole cluster. In the default Docker configuration, the Docker daemon on each Node assigns IPs for that node's containers independently: containers within a Node can reach one another, but containers on different Nodes cannot communicate. Flannel's purpose is to re-plan IP address allocation for every node in the cluster, so that containers on different nodes receive non-overlapping addresses that "belong to the same internal network" and can talk to one another directly over those internal IPs.
Flannel stores its configuration data and subnet allocations in etcd. When flanneld starts, it first reads the configuration and the list of subnets already in use, then picks an available subnet and attempts to register it. etcd also stores the host IP that each subnet maps to. flanneld uses etcd's watch mechanism to monitor changes to everything under /coreos.com/network/subnets and maintains a routing table from that information. For performance, flannel optimizes the Universal TAP/TUN device, proxying IP fragmentation between the TUN device and the UDP socket.
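As a hedged illustration of this watch mechanism, the subnet keys can be observed with the same v2 etcdctl used in the installation steps below (--recursive and --forever are standard v2 etcdctl watch options; the TLS and endpoint flags match the rest of this document):

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
watch --recursive --forever /coreos.com/network/subnets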
The Flannel GitHub page shows the following diagram of the principle:

[Figure: packet-01.png — packet flow diagram from the Flannel GitHub page]


A brief explanation of the diagram (Flannel's workflow can be described as follows):

 

  1. A packet leaving the source container is forwarded by the host's docker0 virtual bridge to the flannel0 virtual NIC. This is a point-to-point virtual device, and the flanneld service listens on its other end.
  2. Through the etcd service, Flannel maintains an inter-node routing table that records the subnet assigned to each node host.
  3. The flanneld service on the source host encapsulates the original payload in UDP and, according to its routing table, delivers it to the flanneld service on the destination node. There the data is decapsulated and goes straight into the destination node's flannel0 virtual NIC, is then forwarded to the destination host's docker0 bridge, and finally docker0 routes it to the target container exactly as for local container-to-container traffic. (One way to observe the encapsulation is sketched right after this list.)
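To see this encapsulation in practice, one can capture traffic on both the overlay device and the physical NIC while pinging between containers on different nodes. This is a minimal sketch assuming the vxlan backend (UDP port 8472 by default, as noted in the backend list below) and the interface names used later in this document (ens192 for the physical NIC, flannel.1 for the VTEP):

# Inner packets, after decapsulation (container-to-container ICMP)
tcpdump -ni flannel.1 icmp
# Outer packets, VXLAN-encapsulated on the physical network
tcpdump -ni ens192 udp port 8472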

Besides UDP, Flannel supports a number of other backends (a sample backend configuration follows the list):

  • udp: user-space UDP encapsulation, on port 8285 by default. Because packets are encapsulated and decapsulated in user space, the performance penalty is considerable
  • vxlan: VXLAN encapsulation; VNI, Port (default 8472), and GBP can be configured
  • host-gw: direct routing; the container network's routes are written straight into the hosts' routing tables. Only usable on networks where all hosts are directly reachable at layer 2
  • aws-vpc: creates routes in an Amazon VPC route table; for containers running on AWS
  • gce: creates routes using a Google Compute Engine network; all instances must have IP forwarding enabled. For containers running on GCE
  • ali-vpc: creates routes in an Alibaba Cloud VPC route table; for containers running on Alibaba Cloud
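For illustration, the backend and its options are chosen in the network config key stored in etcd. The command below is a sketch in the same style as the installation step later in this document; VNI 1 and port 8472 are flannel's documented vxlan defaults and are shown here only to demonstrate where such options go:

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan", "VNI": 1, "Port": 8472} }'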

Official Documentation

https://github.com/coreos/flannel

Download

https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

Environment Preparation

Flannel only needs to be installed on the Node machines; the Master node does not need it.

Operating System IP Address HostName
CentOS7.x-86_x64 10.0.52.14 k8s.node1
CentOS7.x-86_x64 10.0.52.6 k8s.node2

Installing Flannel

Register the network range in etcd for flanneld to use

 

[root@k8s ~]# /opt/etcd/bin/etcdctl \
> --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
> --endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
> set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@k8s ~]# 
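To confirm the key was written as expected, it can be read back with the same etcdctl flags (a simple verification step, not required by the installation):

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
get /coreos.com/network/config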

Extract the archive

  • The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into the file /run/flannel/docker; when Docker starts, it uses the environment variables in that file to configure the docker0 bridge;
  • flanneld talks to other nodes over the interface that holds the system default route. On nodes with multiple interfaces (e.g. an internal and a public one), the -iface flag can select the interface explicitly, as shown in the sketch after this list;
  • flanneld must run as root;
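A hedged sketch of pinning flanneld to a specific interface; -iface is a documented flanneld flag, and ens192 is the physical NIC on the nodes in this document (adjust for your environment):

# Use ens192 for inter-node traffic instead of the default-route interface,
# e.g. by appending -iface=ens192 to FLANNEL_OPTIONS in /opt/kubernetes/cfg/flanneld
/opt/kubernetes/bin/flanneld --ip-masq -iface=ens192 $FLANNEL_OPTIONS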

 

[root@k8s ~]# mkdir flannel
[root@k8s ~]# tar zxf flannel-v0.11.0-linux-amd64.tar.gz -C ./flannel
[root@k8s ~]# ls
anaconda-ks.cfg  flannel  flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s ~]# cd flannel
[root@k8s flannel]# ls
flanneld  mk-docker-opts.sh  README.md
[root@k8s flannel]# 

Install

 

[root@k8s flannel]# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
[root@k8s flannel]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
[root@k8s flannel]# ls  /opt/kubernetes/bin/
flanneld  mk-docker-opts.sh
[root@k8s flannel]# 

Configure Flannel

 

cat << EOF | tee /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem \
-etcd-prefix=/coreos.com/network"
EOF

Create the flanneld systemd unit file

 

cat << EOF | tee /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start the service

 

[root@k8s flannel]# systemctl daemon-reload
[root@k8s flannel]# systemctl start flanneld
[root@k8s flannel]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s flannel]# ps -ef |grep flanneld
root     13688     1  0 13:21 ?        00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem -etcd-prefix=/coreos.com/network
root     13794 12107  0 13:21 pts/0    00:00:00 grep --color=auto flanneld
[root@k8s flannel]# 
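If flanneld fails to start, the systemd journal usually shows the reason (a generic systemd check, nothing specific to this setup):

systemctl status flanneld
journalctl -u flanneld -e --no-pager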

Configure Docker to use the flannel-assigned subnet

 

cat << EOF | tee /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl restart docker
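After the restart, it is worth confirming that dockerd actually picked up the flannel-provided options (a quick sanity check; docker0 is Docker's default bridge):

# dockerd should be running with the --bip/--ip-masq/--mtu values from subnet.env
ps -ef | grep [d]ockerd
# docker0 should now carry the flannel-assigned gateway address
ip addr show docker0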

Distribute the Flannel files to all nodes

 

On node 10.0.52.6, run:
scp -r /opt/kubernetes root@10.0.52.14:/opt/
scp /usr/lib/systemd/system/docker.service  root@10.0.52.14:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service  root@10.0.52.14:/usr/lib/systemd/system/flanneld.service

On node 10.0.52.14, run:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Verify the service

Check /run/flannel/subnet.env; it shows that the subnet flannel has assigned to Docker is --bip=172.17.100.1/24

 

[root@k8s flannel]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.100.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.100.1/24 --ip-masq=false --mtu=1450"
[root@k8s flannel]# 

ifconfig shows docker0 with IP 172.17.100.1 and flannel.1 with IP 172.17.100.0. This means packets sent by other nodes to containers on this node are captured by flannel.1, decapsulated, and forwarded to docker0, which then delivers them to the local containers.

 

[root@k8s docker]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.100.1  netmask 255.255.255.0  broadcast 172.17.100.255
        ether 02:42:52:03:df:a7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.52.6  netmask 255.255.255.0  broadcast 10.0.52.255
        inet6 fe80::5c0e:d0d1:f594:8dcf  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:8c:22:cd  txqueuelen 1000  (Ethernet)
        RX packets 8299777  bytes 1626481176 (1.5 GiB)
        RX errors 0  dropped 25  overruns 0  frame 0
        TX packets 8437318  bytes 1462496730 (1.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.100.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::9cb7:83ff:fe66:9823  prefixlen 64  scopeid 0x20<link>
        ether 9e:b7:83:66:98:23  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 30618  bytes 36997134 (35.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30618  bytes 36997134 (35.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
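For the vxlan backend, the VTEP details behind flannel.1 can be inspected with iproute2 (a standard Linux command, independent of this setup); with -d the output includes the VXLAN ID, UDP port, and local endpoint address:

ip -d link show flannel.1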

Test cross-node container communication. As the output below shows, containers on different nodes can reach each other normally.

 

On node1, run:
[root@k8s cfg]# docker run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
53071b97a884: Pull complete 
Digest: sha256:4b6ad3a68d34da29bf7c8ccb5d355ba8b4babcad1f99798204e7abb43e54ee3d
Status: Downloaded newer image for busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:05:02
          inet addr:172.17.5.2  Bcast:172.17.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping 172.17.100.2
PING 172.17.100.2 (172.17.100.2): 56 data bytes
64 bytes from 172.17.100.2: seq=0 ttl=62 time=0.398 ms
64 bytes from 172.17.100.2: seq=1 ttl=62 time=0.233 ms
64 bytes from 172.17.100.2: seq=2 ttl=62 time=0.232 ms
64 bytes from 172.17.100.2: seq=3 ttl=62 time=0.237 ms
64 bytes from 172.17.100.2: seq=4 ttl=62 time=0.246 ms
64 bytes from 172.17.100.2: seq=5 ttl=62 time=0.229 ms
64 bytes from 172.17.100.2: seq=6 ttl=62 time=0.246 ms
64 bytes from 172.17.100.2: seq=7 ttl=62 time=0.236 ms
^C
--- 172.17.100.2 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max = 0.229/0.257/0.398 ms
/ # 

On node2, run:
[root@k8s docker]# docker run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
53071b97a884: Pull complete 
Digest: sha256:4b6ad3a68d34da29bf7c8ccb5d355ba8b4babcad1f99798204e7abb43e54ee3d
Status: Downloaded newer image for busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:64:02
          inet addr:172.17.100.2  Bcast:172.17.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping 172.17.5.2
PING 172.17.5.2 (172.17.5.2): 56 data bytes
64 bytes from 172.17.5.2: seq=0 ttl=62 time=0.296 ms
64 bytes from 172.17.5.2: seq=1 ttl=62 time=0.218 ms
64 bytes from 172.17.5.2: seq=2 ttl=62 time=0.204 ms
64 bytes from 172.17.5.2: seq=3 ttl=62 time=0.215 ms
64 bytes from 172.17.5.2: seq=4 ttl=62 time=0.231 ms
64 bytes from 172.17.5.2: seq=5 ttl=62 time=0.216 ms
^C
--- 172.17.5.2 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.204/0.230/0.296 ms
/ # 

View the subnets registered in etcd

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
ls /coreos.com/network/subnets

 

[root@k8s ~]# /opt/etcd/bin/etcdctl \
> --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
> --endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
> ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.100.0-24
/coreos.com/network/subnets/172.17.5.0-24
[root@k8s ~]# 

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
get /coreos.com/network/subnets/172.17.100.0-24

 

[root@k8s ~]# /opt/etcd/bin/etcdctl \
> --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
> --endpoints="https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379" \
> get /coreos.com/network/subnets/172.17.100.0-24
{"PublicIP":"10.0.52.6","BackendType":"vxlan","BackendData":{"VtepMAC":"9e:b7:83:66:98:23"}}
[root@k8s ~]# 
  • PublicIP: the node's IP address
  • BackendType: the backend type in use
  • VtepMAC: the MAC address of the virtual VTEP device
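The VtepMAC recorded in etcd should match the MAC of the flannel.1 device on that node (9e:b7:83:66:98:23 on 10.0.52.6 in the ifconfig output above); a quick cross-check:

# On node 10.0.52.6: the 'ether' line should equal the VtepMAC stored in etcd
ip link show flannel.1 | grep ether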

View the routing tables

 

Routing table on node1:
[root@k8s cfg]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    100    0        0 ens192
10.0.52.0       0.0.0.0         255.255.255.0   U     100    0        0 ens192
172.17.5.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.17.100.0    172.17.100.0    255.255.255.0   UG    0      0        0 flannel.1
Routing table on node2:
[root@k8s docker]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    100    0        0 ens192
10.0.52.0       0.0.0.0         255.255.255.0   U     100    0        0 ens192
172.17.5.0      172.17.5.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.100.0    0.0.0.0         255.255.255.0   U     0      0        0 docker0
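Besides the routing table, the vxlan backend also programs the kernel's VXLAN forwarding database, mapping each remote VTEP MAC to the peer node's public IP. A hedged way to inspect it with the standard iproute2 bridge tool (entries differ per node):

# Each remote node should appear as an entry like: <VtepMAC> dst <node public IP>
bridge fdb show dev flannel.1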