StatefulSet
- 1. Introduction
- 2. Example
  - 2.1 Environment cleanup
  - 2.2 Configuration
  - 2.3 Testing
  - 2.4 Supplement
- 3. Deploying a MySQL master-slave cluster with StatefulSet
  - 3.1 Configuration
  - 3.2 Testing
1. Introduction
StatefulSet abstracts application state into two cases:
Topology state: application instances must be started in a specific order, and a newly created Pod must have the same network identity as the Pod it replaces.
Storage state: each instance of the application is bound to its own storage data.
StatefulSet numbers all of its Pods; the naming rule is $(statefulset name)-$(ordinal), starting from 0.
StatefulSet also allocates and creates a PVC with the same ordinal for every Pod. Kubernetes can then bind a matching PV to that PVC through the Persistent Volume mechanism, guaranteeing that each Pod owns an independent volume.
When a Pod is deleted and rebuilt, the new Pod's network identity does not change: the Pod's topology state is pinned down by its "name + ordinal", and every Pod gets a fixed, unique access point, namely its own DNS record.
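As a concrete illustration (using the names from the example in the next section: a StatefulSet named web with two replicas behind a headless Service nginx-svc), the fixed identities look like this:
Pods:  web-0, web-1
PVCs:  www-web-0, www-web-1
DNS:   web-0.nginx-svc.default.svc.cluster.local, web-1.nginx-svc.default.svc.cluster.local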
2. Example
2.1 Environment cleanup
[root@server2 nfs-client]# kubectl delete -f demo.yaml
[root@server2 nfs-client]# kubectl delete -f pvc.yaml
2.2 Configuration
[root@server2 volumes]# pwd
/root/volumes
[root@server2 volumes]# mkdir statefulset
[root@server2 volumes]# cd statefulset/
[root@server2 statefulset]# vim service.yaml
[root@server2 statefulset]# cat service.yaml   ## demo file
apiVersion: v1                  ## shows how a StatefulSet keeps Pod topology state via a headless Service
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None               ## headless service
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 2                   ## replica count; set to 0 to remove only the Pods, delete the controller to remove everything
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1         ## myapp is essentially nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:           # the PV/PVC design is what makes storage-state management by StatefulSet possible
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: managed-nfs-storage   ## StorageClass
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
[root@server2 statefulset]# kubectl apply -f service.yaml
service/nginx-svc created
statefulset.apps/web created
[root@server2 statefulset]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 9s
web-1 1/1 Running 0 5s
[root@server2 statefulset]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-25c0739c-3a00-442e-8287-2b2f216cb676 1Gi RWO Delete Bound default/www-web-0 managed-nfs-storage 15s
pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59 1Gi RWO Delete Bound default/www-web-1 managed-nfs-storage 11s
[root@server2 statefulset]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pvc-25c0739c-3a00-442e-8287-2b2f216cb676 1Gi RWO managed-nfs-storage 17s
www-web-1 Bound pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59 1Gi RWO managed-nfs-storage 13s
[root@server2 statefulset]# kubectl describe svc nginx-svc ## view the svc details
[root@server2 statefulset]# dig -t A web-0.nginx-svc.default.svc.cluster.local @10.96.0.10 ## the per-Pod DNS records can also be checked with dig
[root@server2 statefulset]# dig -t A nginx-svc.default.svc.cluster.local @10.96.0.10
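As an extra cross-check (a command assumed here, with no captured output in the session above), the endpoints behind the headless Service can be listed directly; because clusterIP is None, the Service resolves to the individual Pod IPs rather than a single virtual IP:
[root@server2 statefulset]# kubectl get endpoints nginx-svc   ## should list the web-0 and web-1 Pod IPs on port 80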
2.3 Testing
[root@server1 nfsdata]# pwd
/nfsdata
[root@server1 nfsdata]# ls
archived-pvc-2262d8b4-c660-4301-aad5-2ec59516f14e
archived-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf
default-www-web-0-pvc-25c0739c-3a00-442e-8287-2b2f216cb676
default-www-web-1-pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59
[root@server1 nfsdata]# echo web-0 > default-www-web-0-pvc-25c0739c-3a00-442e-8287-2b2f216cb676/index.html
[root@server1 nfsdata]# echo web-1 > default-www-web-1-pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59/index.html
[root@server2 statefulset]# kubectl run demo --image=busyboxplus -it   ## test
/ # nslookup nginx-svc
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-svc
Address 1: 10.244.22.11 web-0.nginx-svc.default.svc.cluster.local
Address 2: 10.244.141.211 web-1.nginx-svc.default.svc.cluster.local
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
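A useful follow-up (a sketch with assumed commands, no captured output) is to delete one Pod and confirm that the rebuilt Pod keeps the same name, DNS record and data, which is exactly the guarantee described in section 1:
[root@server2 statefulset]# kubectl delete pod web-0      ## the controller recreates web-0 with the same name and PVC
[root@server2 statefulset]# kubectl get pod web-0         ## wait until it is Running again
/ # curl web-0.nginx-svc                                  ## from the busyboxplus demo pod; still returns web-0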
2.4 Supplement
kubectl scaling
First, find the StatefulSet you want to scale and make sure the application can actually be scaled:
$ kubectl get statefulsets <stateful-set-name>
Change the number of replicas of a StatefulSet:
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
If the StatefulSet was originally created with kubectl apply or kubectl create --save-config, update .spec.replicas in the StatefulSet manifest and then run kubectl apply:
$ kubectl apply -f <stateful-set-file-updated>
The field can also be edited with kubectl edit:
$ kubectl edit statefulsets <stateful-set-name>
Or use kubectl patch:
$ kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'
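Applied to the web StatefulSet from section 2, a scaling round-trip would look roughly like this (a sketch with assumed commands, no captured output). Note that scaling down does not delete the PVCs, so the data is still there after a later scale-up:
[root@server2 statefulset]# kubectl scale statefulsets web --replicas=3   ## adds web-2 and a new PVC www-web-2
[root@server2 statefulset]# kubectl scale statefulsets web --replicas=1   ## removes web-2 then web-1, in reverse ordinal order
[root@server2 statefulset]# kubectl get pvc                               ## www-web-1 and www-web-2 remain Bound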
3. Deploying a MySQL master-slave cluster with StatefulSet
The manifests below come from the official Kubernetes example.
[root@server2 mysql]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
[root@server2 mysql]# cat services.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
[root@server2 mysql]# cat statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                          MASTER_HOST='mysql-0.mysql', \
                          MASTER_USER='root', \
                          MASTER_PASSWORD='', \
                          MASTER_CONNECT_RETRY=10; \
                        START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
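To make the init-mysql logic above easier to follow, here is a standalone sketch of how the ordinal is extracted from the Pod hostname and turned into a unique server-id (plain bash; the hostname value is assumed for illustration):
hostname=mysql-2                                 ## StatefulSet pods are named <statefulset name>-<ordinal>
[[ $hostname =~ -([0-9]+)$ ]] || exit 1          ## capture the trailing ordinal with a regex
ordinal=${BASH_REMATCH[1]}
echo "server-id=$((100 + $ordinal))"             ## prints server-id=102; the +100 offset avoids the reserved value 0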
## pull and push the required images
[root@server1 nfsdata]# docker pull mysql:5.7
[root@server1 nfsdata]# docker tag mysql:5.7 reg.westos.org/library/mysql:5.7
[root@server1 nfsdata]# docker push reg.westos.org/library/mysql:5.7
[root@server1 nfsdata]# docker pull yizhiyong/xtrabackup
[root@server1 nfsdata]# docker tag yizhiyong/xtrabackup:latest reg.westos.org/library/xtrabackup:1.0
[root@server1 nfsdata]# docker push reg.westos.org/library/xtrabackup:1.0
[root@server2 mysql]# kubectl apply -f configmap.yaml
configmap/mysql created
[root@server2 mysql]# kubectl describe cm mysql
Name: mysql
Namespace: default
Labels: app=mysql
Annotations:  <none>

Data
====
master.cnf:
----
# Apply this config only on the master.
[mysqld]
log-bin

slave.cnf:
----
# Apply this config only on slaves.
[mysqld]
super-read-only

Events:  <none>
[root@server2 mysql]# kubectl apply -f services.yaml
service/mysql created
service/mysql-read created
[root@server2 mysql]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d5h
mysql ClusterIP None <none> 3306/TCP 6s
mysql-read ClusterIP 10.109.234.245 <none> 3306/TCP 6s
[root@server2 mysql]# yum install mariadb -y   ## a MySQL client is needed for testing
[root@server2 mysql]# kubectl apply -f statefulset.yaml
[root@server2 mysql]# kubectl get pod
[root@server2 mysql]# kubectl get pvc
[root@server2 mysql]# kubectl get pv
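Before testing, it can be worth confirming (assumed commands, no captured output) that the init container copied master.cnf only into mysql-0 and slave.cnf into the other replicas:
[root@server2 mysql]# kubectl exec mysql-0 -c mysql -- ls /etc/mysql/conf.d   ## expect master.cnf and server-id.cnf
[root@server2 mysql]# kubectl exec mysql-1 -c mysql -- ls /etc/mysql/conf.d   ## expect slave.cnf and server-id.cnf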
3.2 Testing
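A minimal test, along the lines of the official example (a sketch with assumed commands in a temporary mysql client Pod), writes through the master via mysql-0.mysql and reads back through the load-balanced mysql-read Service:
[root@server2 mysql]# kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE test; CREATE TABLE test.messages (message VARCHAR(250)); INSERT INTO test.messages VALUES ('hello');"
[root@server2 mysql]# kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT * FROM test.messages"
The mariadb client installed on server2 could also be pointed at the mysql-read ClusterIP for the read check.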