Table of Contents
- 1. Controllers
- 1.1 The relationship between Pods and controllers
- 1.2 Deployment
  - Features
  - Use case: web services
  - Test
- 1.3 StatefulSet
  - Official documentation
  - Features
  - Use case: databases
  - Differences between a regular Service and a headless Service
  - Test
  - Configuring the DNS service from a YAML file
- 1.4 Stateful vs. stateless
- 1.5 DaemonSet
- 1.6 Job
  - Official documentation
  - Test
- 1.7 CronJob
  - Official documentation
  - Test
1. Controllers
Controllers, also called workloads, come in the following types:
- Deployment: suited to deploying stateless services
- StatefulSet: suited to deploying stateful services
- DaemonSet: deploy once and every node runs a copy of the Pod, including nodes that join later. Suited to cluster-wide agents such as log collection and monitoring.
- Job: a one-off task; the container exits when the task finishes.
- CronJob: a task that runs on a schedule.
1.1 The relationship between Pods and controllers
Kubernetes runs many controllers internally. Each controller acts like a state machine that drives Pods toward a desired state and behavior.
Controllers are the objects that manage and run containers in the cluster; they are associated with their Pods through a label selector.
Pods rely on controllers for operational tasks such as scaling and upgrades.
1.2 Deployment
Features:
- Deploys stateless applications
- Manages Pods and ReplicaSets
- Supports rollout, replica configuration, rolling upgrades, and rollbacks
- Provides declarative updates, for example updating only the image
Use case: web services
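The rolling-upgrade behavior of a Deployment is controlled by its update strategy. A minimal sketch of the relevant fields (the maxSurge/maxUnavailable values here are illustrative, not taken from the manifests in this article):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod above the desired replica count during an update
      maxUnavailable: 1    # at most one Pod may be unavailable during an update
```

With these values, an image update replaces Pods one at a time, so at least two replicas keep serving traffic throughout the rollout.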
Test
[root@master test]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@master test]# kubectl create -f nginx-deployment.yaml ## create the Pods
deployment.apps/nginx-deployment created
[root@master test]# kubectl get pods,deploy,rs ## view the created pod, deployment, and replicaset resources
NAME READY STATUS RESTARTS AGE
pod/foo 0/1 Completed 0 22h
pod/frontend 2/2 Running 0 23h
pod/liveness-exec 1/1 Running 70 12h
pod/nginx-deployment-d55b94fd-2s2zf 1/1 Running 0 26m
pod/nginx-deployment-d55b94fd-bpn9w 1/1 Running 0 26m
pod/nginx-deployment-d55b94fd-qlvtz   1/1   Running   0   26m

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-deployment   3         3         3            3           26m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.extensions/nginx-deployment-d55b94fd     3         3         3       26m
View the controller; its parameters can also be modified here:
kubectl edit deployment/nginx-deployment
View the Deployment revision history:
[root@master test]# kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
1.3 StatefulSet
Official documentation
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Features
- Deploys stateful applications
- Gives each Pod an independent lifecycle, preserving startup order and uniqueness
- Stable, unique network identifiers and persistent storage (for example, an etcd configuration file becomes unusable if the node address changes)
- Ordered, graceful deployment, scaling, deletion, and termination (for example, a MySQL primary/replica setup starts the primary first, then the replicas)
- Ordered rolling updates
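The ordered rolling update listed above can also be staged with a partition, which is useful for canarying a new image on only the highest-ordinal Pods. A hedged sketch (the partition value is illustrative):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only Pods with ordinal >= 2 are updated; lower ordinals keep the old revision
```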
Use case: databases
Differences between a regular Service and a headless Service
Service: an access policy for a group of Pods; provides a cluster IP for communication inside the cluster, plus load balancing and service discovery.
Headless Service: a Service without a cluster IP; clusterIP is set to None and clients bind directly to the individual Pod IPs.
Test
Walkthrough of the official example
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
The headless Service named nginx controls the network domain.
The StatefulSet named web has a Spec declaring that the nginx container will run in 3 independent Pod replicas.
volumeClaimTemplates provides stable storage through PersistentVolumes supplied by a PersistentVolume provisioner.
[root@master test]# vim nginx-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
[root@master test]# kubectl create -f nginx-headless.yaml
service/nginx-headless created
[root@master test]# kubectl get svc ## view the created Service; note whether CLUSTER-IP is None
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d1h
nginx-headless ClusterIP None <none> 80/TCP 49s
At this point we can see that EXTERNAL-IP is also empty.
[root@master test]# kubectl create -f nginx-deployment.yaml ## create the nginx Pods
deployment.apps/nginx-deployment created
[root@master test]# kubectl get ep ## the endpoints now show IPs
NAME ENDPOINTS AGE
kubernetes 14.0.0.57:6443,14.0.0.87:6443 3d2h
nginx-headless 172.17.70.2:80,172.17.75.2:80,172.17.75.3:80 24m
Configuring the DNS service from a YAML file
The file is downloaded from the Internet; the IP address at the end must be changed to the cluster's internal communication subnet address.
https://www.kubernetes.org.cn/4694.html
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount    # service account: provides identity for processes in Pods and for external users
metadata:
  name: coredns
  namespace: kube-system    # specify the namespace
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole    # create a role with access permissions
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding    # bind the cluster role to the service account
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap    # configures how service discovery works
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |    # the CoreDNS configuration file
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
endpoint
An Endpoints object is a resource in the k8s cluster, stored in etcd, that records the access addresses of all Pods backing a Service. The endpoint controller automatically creates an Endpoints object only when the Service has a selector configured; otherwise, no Endpoints object is generated.
For example:
Creating a Service named hello in the k8s cluster produces an Endpoints object of the same name; its ENDPOINTS column holds the IP addresses and ports of the Pods associated with the Service.
A Service is backed by a group of Pods, which are exposed through endpoints. The Service's selector is evaluated continuously, and the result is POSTed to an Endpoints object named Service-hello. When a Pod terminates, it is automatically removed from the Endpoints object, and new Pods matching the Service's selector are automatically added. Inspecting the Endpoints object shows IP addresses matching the Pods that were created. Any node in the cluster can now reach the hello Service with curl at <CLUSTER-IP>:<PORT>. Note that the Service IP is entirely virtual; it never appears on the wire.
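As noted above, the endpoint controller only creates an Endpoints object when the Service defines a selector. For a selector-less Service, the Endpoints object can be written by hand, which is how external backends are commonly wired into the cluster. A sketch (the name external-svc and the address 14.0.0.100 are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-svc      # hypothetical name; no selector, so no Endpoints are auto-created
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-svc      # must match the Service name
subsets:
- addresses:
  - ip: 14.0.0.100        # hypothetical backend outside the cluster
  ports:
  - port: 80
```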
[root@master test]# kubectl create -f coredns.yaml ## create the resources from the YAML
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master test]# kubectl get pods -n kube-system ## list the Pods in the kube-system namespace
NAME READY STATUS RESTARTS AGE
coredns-56684f94d6-qvb7q 1/1 Running 1 2m46s
kubernetes-dashboard-7dffbccd68-74p6d 1/1 Running 1 3d21h
[root@master test]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@master test]# kubectl create -f pod3.yaml
pod/dns-test created
Restart the flannel component and the docker service on the node servers.
Perform the following on each node:
[root@localhost ~]# systemctl restart flanneld.service
[root@localhost ~]# systemctl restart docker
Expose the nginx service port:
[root@master test]# vim nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
[root@master test]# kubectl create -f nginx-service.yaml
service/nginx-service created
[root@master test]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4d
nginx-headless ClusterIP None <none> 80/TCP 22h
nginx-service NodePort 10.0.0.182 <none> 80:48645/TCP 31s
[root@master test]# kubectl exec -it dns-test sh ## exec into the Pod to test
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
Create a stateful resource and resolve it through DNS:
[root@master test]# vim sts.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
[root@master test]# kubectl create -f sts.yaml ## create the Pods
service/nginx created
statefulset.apps/nginx-statefulset created
[root@master test]# kubectl exec -it dns-test sh
/ # nslookup nginx-statefulset-0.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      nginx-statefulset-0.nginx
Address 1: 172.17.28.2 nginx-statefulset-0.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-1.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      nginx-statefulset-1.nginx
Address 1: 172.17.65.4 nginx-statefulset-1.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-2.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      nginx-statefulset-2.nginx
Address 1: 172.17.28.3 nginx-statefulset-2.nginx.default.svc.cluster.local
[root@master test]# kubectl get ep ## view the Pod addresses recorded by the endpoints
NAME ENDPOINTS AGE
kubernetes 14.0.0.57:6443,14.0.0.87:6443 4d2h
nginx 172.17.28.2:80,172.17.28.3:80,172.17.65.4:80 10m
How a StatefulSet differs from a Deployment: its Pods have an identity!
The three elements of identity:
Domain name: nginx-statefulset-0.nginx
Hostname: nginx-statefulset-0
Storage (PVC)
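The stable domain names above follow a fixed pattern, which is what the nslookup queries earlier resolved:

```
<statefulset-name>-<ordinal>.<service-name>.<namespace>.svc.cluster.local
nginx-statefulset-0.nginx.default.svc.cluster.local
```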
1.4 Stateful vs. stateless
Stateless:
1) A Deployment treats all of its Pods as identical
2) No ordering requirements
3) No concern about which node a Pod runs on
4) Can be scaled up and down freely
Stateful:
1) Instances differ from one another; each has its own identity and distinct metadata, e.g. etcd, zookeeper
2) Instances are not interchangeable, and the application may depend on external storage.
1.5 DaemonSet
Runs one Pod on every node.
A newly joined node automatically runs one Pod as well.
[root@master test]# vim da.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@master test]# kubectl delete -f . ## delete all resources created from the YAML files in the current directory; this is a dangerous command, use with care
[root@master test]# kubectl apply -f da.yaml
daemonset.apps/nginx-deployment created
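DaemonSets used for log collection or monitoring often also need to run on master nodes, which are usually tainted so that ordinary Pods avoid them. A hedged sketch of the extra toleration, added under the Pod template's spec (not part of the manifest above):

```yaml
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # allow scheduling onto tainted master nodes
        effect: NoSchedule
```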
1.6 Job
Jobs come in two forms, ordinary tasks (Job) and scheduled tasks (CronJob); a Job runs to completion once.
Use cases: offline data processing, video transcoding, and similar workloads
Official documentation
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
Test
In this example the retry limit, which defaults to 6, is set to 4. On failure, a Pod with restartPolicy Never is recreated, so the retry limit should be set explicitly.
[root@master test]# vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
Pull the perl image on the nodes ahead of time, because it is large:
[root@localhost ~]# docker pull perl
Create the resource on the master:
[root@master test]# kubectl apply -f job.yaml
job.batch/pi created
[root@master test]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-ghqlv 1/1 Running 0 9m27s
nginx-deployment-zrgh4 1/1 Running 0 9m27s
pi-qj92z 0/1 Completed 0 27s
[root@master test]# kubectl logs pi-qj92z ## check the result via the logs
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1
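Besides backoffLimit, a Job can run several Pods at once when the task splits into independent work items. A sketch with illustrative values (not part of the pi example above):

```yaml
spec:
  completions: 5    # the Job is done after 5 Pods finish successfully
  parallelism: 2    # at most 2 Pods run at the same time
```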
1.7 CronJob
A scheduled task, like crontab on Linux.
Use cases: notifications, backups
Official documentation
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/
Test
This example prints hello every minute.
[root@master test]# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@master test]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
[root@master test]# kubectl get cronjob ## list the scheduled tasks
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 0 <none> 3s
[root@master test]# kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-1602824220-44z5r 0/1 Completed 0 32s
[root@master test]# kubectl logs hello-1602824220-44z5r ## view the log of the scheduled run
Fri Oct 16 04:57:20 UTC 2020
Hello from the Kubernetes cluster
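A CronJob keeps a limited history of finished Jobs by default, and overlapping runs can be controlled as well. A sketch of these spec fields with illustrative values (not part of the hello example above):

```yaml
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid        # skip a run if the previous one is still active
  successfulJobsHistoryLimit: 3    # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1        # keep the last failed Job
```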