
A detailed walkthrough of installing kubernetes and istio


Before we begin

First, a word about my machines: three freshly provisioned bare-metal servers, configured as follows

  • 10.20.1.103 4C 8G 50G disk node4 master centos7
  • 10.20.1.104 4C 8G 50G disk node5 node centos7
  • 10.20.1.105 4C 8G 50G disk node6 node centos7

My installation follows the officially recommended kubeadm approach, and likewise for istio. If you want the original documentation, see the links below.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://istio.io/docs/setup/getting-started/

I'll assume you already have docker installed.
That's about it for the preliminaries, so let's get started!

kubernetes

step1

First, install a few common Linux utilities, nothing unusual:

yum install -y vim wget

step2

Verify that the MAC addresses and product_uuid are unique on every node; in general, fresh machines have no problems here. A small sketch for checking all three nodes at once follows the two commands below.

ip link
sudo cat /sys/class/dmi/id/product_uuid
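
If you want to compare all three nodes in one shot, here is a minimal sketch, assuming passwordless SSH as root to the IPs listed above:

# Collect the MAC addresses and product_uuid of every node for a quick uniqueness check
for host in 10.20.1.103 10.20.1.104 10.20.1.105; do
  echo "== $host =="
  ssh root@"$host" "ip link show | awk '/link\/ether/ {print \$2}'; cat /sys/class/dmi/id/product_uuid"
done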

step3

Open the required ports on every machine; nothing surprising here.

// run on the master
[root@localhost ~]# firewall-cmd --zone=public --add-port=6443/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=2379/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=2380/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10251/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10252/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success
// run on the worker nodes
[root@localhost ~]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success
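
To confirm the rules actually took effect, you can list the open ports on each node after the reload:

# Should print the ports added above (e.g. 6443/tcp 2379/tcp 2380/tcp 10250/tcp ... on the master)
firewall-cmd --zone=public --list-ports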

step4

Install kubeadm, kubelet and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Up to this step, if you have no way around the network restrictions (i.e. you can't reach Google), the installation will fail, as shown below. This is perfectly normal.

[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.huaweicloud.com
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2404:6800:4012::200e: Network is unreachable"
Trying other mirror.
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2404:6800:4012::200e: Network is unreachable"
Trying other mirror.
^Chttps://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#56 - "Callback aborted"
Trying other mirror.

Because we can't reach google, we point the repo at a mirror and download from there instead. After the edit the file looks like the code block below; in practice we only commented out two lines and added their mirror equivalents.

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
# baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
# gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
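
Equivalently, you can write the mirrored repo file in one step (same Aliyun URLs as above) and then re-run the install; a small sketch:

# Write the mirror-based repo file directly, then retry the install
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
yum clean all
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes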

Verify the installation; we can see it succeeded:

[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@localhost ~]# kubelet --version
Kubernetes v1.17.1
[root@localhost ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:02:14Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

Start kubelet

systemctl enable --now kubelet
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
systemctl daemon-reload
systemctl restart kubelet
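
One note on the sysctl settings: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded. If sysctl --system complains that they cannot be found, loading the module first and re-applying usually resolves it:

# Load the bridge netfilter module, then re-apply the sysctl settings
modprobe br_netfilter
sysctl --system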

At this point we're halfway there. The next step is to bring up the cluster.

step5

Bring up the cluster

// initialize the master first
[root@localhost ~]# kubeadm init --kubernetes-version=v1.17.0
W0117 17:16:23.316968   11538 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0117 17:16:23.317084   11538 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

After running the command we get some warnings and errors, so run the following to stop the firewall and turn off swap (both are temporary; a sketch of making them persistent follows the two commands).

systemctl stop firewalld
swapoff -a
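
Note that systemctl stop and swapoff -a only last until the next reboot. If you want the change to survive a reboot, a minimal sketch (assuming the swap entry lives in /etc/fstab):

# Keep firewalld off at boot and comment out the swap entry in /etc/fstab
systemctl disable firewalld
sed -ri 's/.*swap.*/#&/' /etc/fstab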

Running it again, only one warning is left, and we can clear it by switching docker's cgroup driver. Edit the file as shown below, then restart docker (a concrete restart-and-verify sketch follows the snippet).

[root@node1 ~]# vim /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
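
Concretely, "restart docker" after the edit could look like this, with a quick check that the driver is now systemd:

# Apply the new daemon.json and confirm the cgroup driver switched
systemctl daemon-reload
systemctl restart docker
docker info 2>/dev/null | grep -i "cgroup driver"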

After restarting docker, we run kubeadm init once more. It will definitely fail again, for exactly the same reason as the failed yum install earlier: we can't reach google, so the images can't be pulled.

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1

So the task now is simply to get hold of the dependency images listed below (one common way to do that is sketched after the listing).

[root@node1 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.17.0             7d54289267dc        5 weeks ago         116MB
k8s.gcr.io/kube-scheduler            v1.17.0             78c190f736b1        5 weeks ago         94.4MB
k8s.gcr.io/kube-apiserver            v1.17.0             0cae8d5cc64c        5 weeks ago         171MB
k8s.gcr.io/kube-controller-manager   v1.17.0             5eb3b7486872        5 weeks ago         161MB
k8s.gcr.io/coredns                   1.6.5               70f311871ae1        2 months ago        41.6MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        2 months ago        288MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
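
The images above live on k8s.gcr.io, which we can't reach, so one common workaround is to pull them from a domestic mirror and retag them locally so kubeadm's preflight check finds them under the expected names. A minimal sketch, assuming the registry.aliyuncs.com/google_containers mirror carries these exact tags:

#!/bin/bash
# Pull each required image from the mirror and retag it as k8s.gcr.io/<image>
MIRROR=registry.aliyuncs.com/google_containers
images=(
  kube-apiserver:v1.17.0
  kube-controller-manager:v1.17.0
  kube-scheduler:v1.17.0
  kube-proxy:v1.17.0
  pause:3.1
  etcd:3.4.3-0
  coredns:1.6.5
)
for img in "${images[@]}"; do
  docker pull "$MIRROR/$img"
  docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
  docker rmi  "$MIRROR/$img"
done

With the images in place, kubeadm init should get past the ImagePull errors.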