
(Part 1) Deploying Highly Available KubeSphere 3.0 on vSphere (Public Beta)


Table of Contents

  • Deploying Highly Available KubeSphere on vSphere
    • 1. Prerequisites
    • 2. Deployment architecture
    • 3. Creating the hosts
    • 4. Deploying keepalived + haproxy
      • 1. Installing with yum
      • 2. Configuring haproxy
      • 3. Configuring keepalived
      • 4. Verifying availability
    • 5. Downloading the installer executable
    • 6. Creating a multi-node cluster
        • 1. Deploying the k8s cluster with kubekey
        • 2. Multi-node cluster configuration
        • 3. Output

Deploying Highly Available KubeSphere on vSphere

For production environments, we need to consider the high availability of the cluster. If critical components such as kube-apiserver, kube-scheduler, and kube-controller-manager all run on the same master node, Kubernetes and KubeSphere become unavailable as soon as that node fails. Therefore, we need to set up a highly available cluster by placing multiple master nodes behind a load balancer. You can use any cloud load balancer or any hardware load balancer (such as F5). Keepalived combined with HAProxy or Nginx is another way to build a highly available cluster.

This tutorial walks through an example of using keepalived + haproxy to load-balance kube-apiserver and build a highly available Kubernetes cluster.

1. Prerequisites

  • Follow this guide so that you already know how to install KubeSphere on a multi-node cluster. For details on the config YAML file used for installation, see the multi-node installation documentation. This tutorial focuses on how to configure the load balancer.
  • You need a VMware vSphere account to create the VM resources.
  • For data persistence in production, we recommend preparing persistent storage and creating a StorageClass in advance; a minimal sketch follows this list. For development and testing, you can use the integrated OpenEBS to provision LocalPV volumes directly as the storage service.
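As a rough illustration of pre-creating a StorageClass, the sketch below is one possible shape; the name vsphere-sc is a placeholder, and the provisioner assumes the vSphere CSI driver is installed (substitute whatever storage backend your environment actually provides):

# Minimal StorageClass sketch (placeholder name; assumes the vSphere CSI driver)
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF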

2. Deployment architecture

[Figure: deployment architecture diagram]

3. Creating the hosts

This example creates eight virtual machines running CentOS Linux release 7.6.1810 (Core), each configured with 8 cores, 16 GB of RAM, and a 100 GB disk, plus one virtual IP (the vip row in the table below is an address, not a VM).

Host IP        Hostname   Role
10.10.71.214   master1    master, etcd
10.10.71.73    master2    master, etcd
10.10.71.62    master3    master, etcd
10.10.71.75    node1      node
10.10.71.76    node2      node
10.10.71.79    node3      node
10.10.71.67    vip        virtual IP (not a VM)
10.10.71.77    lb-0       lb (keepalived + haproxy)
10.10.71.66    lb-1       lb (keepalived + haproxy)

In vSphere, select the resource pool in which to create the VMs, right-click, and choose New Virtual Machine (there are several entry points for creating a VM; use whichever you prefer).

Select the creation type: Create a new virtual machine.

Enter a name for the virtual machine and select a folder to store it in.

Select a compute resource.

Select storage.

Select compatibility; in this example, ESXi 7.0 and later.

Select the guest OS: Linux, CentOS 7 (64-bit).

Customize the hardware: mount the OS installation ISO on the CD/DVD drive (check Connect At Power On) and attach the network adapter to VLAN71 (checked).

On the summary page, confirm that everything is correct and click Finish. Repeat for each host in the table above; a scripted alternative is sketched below.
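If you prefer not to click through the wizard for every VM, a hypothetical scripted equivalent using VMware's govc CLI is sketched below. It assumes govc is installed and GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are exported; the datastore and ISO path are placeholders for your environment:

# govc sketch of the wizard steps above (one VM; repeat per host in the table)
govc vm.create -c 8 -m 16384 -disk 100GB \
  -g centos7_64Guest -net VLAN71 \
  -iso "[datastore1] iso/CentOS-7-x86_64-DVD-1810.iso" \
  -on=false master1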

4. Deploying keepalived + haproxy

1. Installing with yum

# Deploy keepalived + haproxy on hosts lb-0 and lb-1,
# i.e. install haproxy, keepalived, and psmisc on 10.10.71.77 and 10.10.71.66
# (psmisc provides the killall command used by the keepalived health check below)
yum install keepalived haproxy psmisc -y
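To confirm the packages installed correctly, the standard version flags can be used (output varies with the repo version):

# Optional: verify the installed versions
haproxy -v
keepalived --version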

2. Configuring haproxy

On the servers 10.10.71.77 and 10.10.71.66, configure haproxy (the two lb machines use identical configuration; just make sure the backend server addresses are correct).

# HAProxy configuration: /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    log                     global
    option                  httplog
    option                  dontlognull
    timeout connect         5000
    timeout client          5000
    timeout server          5000
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kube-apiserver
    mode tcp
    option tcplog
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 10.10.71.214:6443 check
    server kube-apiserver-2 10.10.71.73:6443 check
    server kube-apiserver-3 10.10.71.62:6443 check

# Check the configuration for syntax errors before starting
haproxy -f /etc/haproxy/haproxy.cfg -c
# Start haproxy and enable it at boot
systemctl restart haproxy && systemctl enable haproxy
# Stop haproxy (when needed)
systemctl stop haproxy
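An optional sanity check on each lb node confirms that haproxy is actually listening on the API server port:

# Expect a LISTEN entry on *:6443 owned by haproxy
ss -tlnp | grep 6443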

3. Configuring keepalived

# Primary node: lb-0 (10.10.71.77)
# /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
    }
    smtp_connect_timeout 30        # SMTP connection timeout
    router_id LVS_DEVEL01          # a nickname for this server
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # exit code 0 means the haproxy process is alive
    interval 2
    weight 2
}
vrrp_instance haproxy-vip {
    state MASTER                   # the primary server is MASTER
    priority 100                   # must be higher than the backup's priority
    interface ens192               # the NIC this instance binds to
    virtual_router_id 60           # hot-standby group 60; both nodes must use the same id
    advert_int 1                   # advertise every second to detect peer failure
    authentication {
        auth_type PASS             # authentication type
        auth_pass 1111             # authentication password, a shared secret between peers
    }
    unicast_src_ip 10.10.71.77     # address of this machine
    unicast_peer {
        10.10.71.66                # addresses of the other peers
    }
    virtual_ipaddress {
        10.10.71.67/24             # the VIP
    }
    track_script {
        chk_haproxy
    }
}

# Backup node: lb-1 (10.10.71.66)
# /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL02          # a nickname for this server
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}
vrrp_instance haproxy-vip {
    state BACKUP                   # the standby server is BACKUP
    priority 90                    # lower than the MASTER's priority
    interface ens192               # the NIC this instance binds to
    virtual_router_id 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 10.10.71.66     # address of this machine
    unicast_peer {
        10.10.71.77                # addresses of the other peers
    }
    virtual_ipaddress {
        10.10.71.67/24             # the VIP, note the /24 prefix
    }
    track_script {
        chk_haproxy
    }
}

# Start keepalived and enable it at boot
systemctl restart keepalived && systemctl enable keepalived
systemctl stop keepalived      # stop keepalived (when needed)
systemctl start keepalived     # start the keepalived service

4. Verifying availability

  • Run ip a s on each lb node to see where the VIP is currently bound.
  • Stop haproxy on the node that holds the VIP: systemctl stop haproxy
  • Run ip a s on each lb node again and check whether the VIP has floated to the other node (a sample drill is sketched below).
  • Alternatively, inspect the state with systemctl status -l keepalived
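A minimal failover drill, assuming lb-0 currently holds the VIP and the interface is ens192 as configured above:

# On lb-0: confirm the VIP is bound, then stop haproxy so chk_haproxy fails
ip a s ens192 | grep 10.10.71.67
systemctl stop haproxy
# On lb-1: the VIP should appear here within a few seconds
ip a s ens192 | grep 10.10.71.67
# Back on lb-0: restart haproxy; the MASTER's higher priority pulls the VIP back
systemctl start haproxy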

5. Downloading the installer executable

# Download the installer (kk) to one of the target machines
curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
chmod +x kk
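A quick smoke test of the binary (assuming this kk build supports the version subcommand):

./kk version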

6. Creating a multi-node cluster

You can use the advanced installation to control custom parameters and create a multi-node cluster; specifically, the cluster is created from a configuration file.

1. Deploying the k8s cluster with kubekey

# Generate a sample configuration file that already includes the KubeSphere settings
./kk create config --with-kubesphere v3.0.0 -f ~/config-sample.yaml
# On a repeat installation the images are already local, so pulling can be skipped
./kk create cluster -f ~/config-sample.yaml --debug --skip-pull-images
# Delete the cluster
./kk delete cluster -f ~/config-sample.yaml --debug
# Tip: if the installation fails with `Failed to add worker to cluster: Failed to exec command...`,
# run the following on the affected node, then retry:
kubeadm reset
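Once kk reports success, a quick sanity check from master1 verifies that the control plane and workers have joined:

kubectl get nodes -o wide
kubectl get pods --all-namespaces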

2. Multi-node cluster configuration

#vi ~/config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: config-sample
spec:
  hosts:
  - {name: master1, address: 10.10.71.214, internalAddress: 10.10.71.214, password: P@ssw0rd!}
  - {name: master2, address: 10.10.71.73, internalAddress: 10.10.71.73, password: P@ssw0rd!}
  - {name: master3, address: 10.10.71.62, internalAddress: 10.10.71.62, password: P@ssw0rd!}
  - {name: node1, address: 10.10.71.75, internalAddress: 10.10.71.75, password: P@ssw0rd!}
  - {name: node2, address: 10.10.71.76, internalAddress: 10.10.71.76, password: P@ssw0rd!}
  - {name: node3, address: 10.10.71.79, internalAddress: 10.10.71.79, password: P@ssw0rd!}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    master:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "10.10.71.67"    # the VIP
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
    masqueradeAll: false  # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
    maxPods: 110          # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
    nodeCidrMaskSize: 24  # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
    proxyMode: ipvs       # mode specifies which proxy mode to use. [Default: ipvs]
  network:
    plugin: calico
    calico:
      ipipMode: Always  # IPIP mode to use for the IPv4 pool created at startup. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never  # VXLAN mode to use for the IPv4 pool created at startup. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 1440     # The maximum transmission unit (MTU) determines the largest packet size that can be transmitted through your network. [Default: 1440]
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: ""
  storage:
    defaultStorageClass: localVolume
    localVolume:
      storageClassName: local

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true    # whether to install the etcd monitoring dashboard
    endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9  # etcd cluster endpointIps
    port: 2379          # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi     # MySQL PVC size
    minioVolumeSize: 20Gi     # Minio PVC size
    etcdVolumeSize: 20Gi      # etcd PVC size
    openldapVolumeSize: 2Gi   # openldap PVC size
    redisVolumSize: 2Gi       # Redis PVC size
    es:  # storage backend for logging, tracing, events and auditing
      elasticsearchMasterReplicas: 1       # total number of master nodes; an even number is not allowed
      elasticsearchDataReplicas: 1         # total number of data nodes
      elasticsearchMasterVolumeSize: 4Gi   # volume size of Elasticsearch master nodes
      elasticsearchDataVolumeSize: 20Gi    # volume size of Elasticsearch data nodes
      logMaxAge: 7          # log retention time in the built-in Elasticsearch; 7 days by default
      elkPrefix: logstash   # the string making up index names; the index name is formatted as ks-<elk_prefix>-log
      # externalElasticsearchUrl:
      # externalElasticsearchPort:
  console:
    enableMultiLogin: false  # enable/disable multiple sign-on; allows an account to be used by several users at the same time
    port: 30880
  alerting:       # whether to install the KubeSphere alerting system; lets users define alerting policies with different intervals and levels to send messages to receivers in time
    enabled: false
  auditing:       # whether to install the KubeSphere audit log system; records a security-relevant chronological set of activities in the platform, by tenant
    enabled: false
  devops:         # whether to install the KubeSphere DevOps system: out-of-box CI/CD based on Jenkins, with Source-to-Image & Binary-to-Image workflow tools
    enabled: false
    jenkinsMemoryLim: 2Gi       # Jenkins memory limit
    jenkinsMemoryReq: 1500Mi    # Jenkins memory request
    jenkinsVolumeSize: 8Gi      # Jenkins volume size
    jenkinsJavaOpts_Xms: 512m   # the following three fields are JVM parameters
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:         # whether to install the KubeSphere events system: a graphical web console for exporting, filtering and alerting on Kubernetes events in multi-tenant clusters
    enabled: false
  logging:        # whether to install the KubeSphere logging system: unified log query, collection and management; extra collectors such as Elasticsearch, Kafka and Fluentd can be added
    enabled: false
    logsidecarReplicas: 2
  metrics_server: # whether to install metrics-server; it enables HPA (Horizontal Pod Autoscaler)
    enabled: true
  monitoring:
    prometheusReplicas: 1            # Prometheus replicas monitor different segments of the data source and provide high availability
    prometheusMemoryRequest: 400Mi   # Prometheus memory request
    prometheusVolumeSize: 20Gi       # Prometheus PVC size
    alertmanagerReplicas: 1          # AlertManager replicas
  multicluster:
    clusterRole: none  # host | member | none  # install a solo cluster, or give it the role of a host or member cluster
  networkpolicy:  # network policies allow network isolation within the same cluster, i.e. firewalls between certain instances (Pods)
    enabled: false
  notification:   # notification management in multi-tenant clusters; AlertManager as the sender, with Email, WeChat Work and Slack as receivers
    enabled: false
  openpitrix:     # whether to install the KubeSphere Application Store: a store for Helm-based applications with lifecycle management
    enabled: false
  servicemesh:    # whether to install KubeSphere Service Mesh (Istio-based): fine-grained traffic management, observability, tracing, and traffic topology visualization
    enabled: false
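To follow the installer's progress until the welcome banner below is printed, KubeSphere documents this log-tailing command:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f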

3. Output

**************************************************
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://10.10.71.214:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.
#####################################################
https://kubesphere.io             2020-08-15 23:32:12
#####################################################