
Exposing services on Alibaba Cloud through nginx-ingress-controller

Published: 2024-02-02

This article draws on the Alibaba Cloud documentation at https://developer.aliyun.com/article/721569, cross-checked against an actually deployed Kubernetes cluster, and walks through how different applications running in the cluster can be reached from the public internet under different domain names.

The idea:
nginx-ingress-controller is essentially an application running inside the Kubernetes cluster that plays the role of an nginx reverse proxy. This nginx application accepts all requests arriving for the various domain names, and forwards each request to the matching Service according to the rules defined in the corresponding Ingress resources.
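As a sketch of that idea: two Ingress resources could map two domains to two backend Services. The Service names `service-a` and `service-b` below are hypothetical, chosen only to illustrate host-based routing; the API version matches the one used later in this article.

```yaml
# Illustrative only: service-a / service-b are hypothetical names,
# not taken from the cluster described in this article.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: a-ingress
spec:
  rules:
  - host: a.com            # requests with Host: a.com ...
    http:
      paths:
      - path: /
        backend:
          serviceName: service-a   # ... go to service-a
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: b-ingress
spec:
  rules:
  - host: b.com            # requests with Host: b.com ...
    http:
      paths:
      - path: /
        backend:
          serviceName: service-b   # ... go to service-b
          servicePort: 80
```

Both domains hit the same controller; the `host` field is what selects the backend Service.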


  1. Look at the nginx-ingress Service
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-06-04T11:49:59Z'
  labels:
    app: nginx-ingress-lb
  name: nginx-ingress-lb
  namespace: kube-system
  resourceVersion: '147529875'
  selfLink: /api/v1/namespaces/kube-system/services/nginx-ingress-lb
  uid: 6106e164-7f80-11e8-b128-00163e10ac41
spec:
  clusterIP: 10.1.1.135
  externalTrafficPolicy: Local
  healthCheckNodePort: 31999
  ports:
  - name: http
    nodePort: 30613
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31070
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xxx.xxx

Explanation: this Service is exposed with type LoadBalancer, which makes it reachable from the public internet; the load balancer's external IP is xx.xx.xxx.xxx. Suppose the cluster runs two applications, A and B, which should be reached via a.com and b.com respectively. Then the DNS A records of both domains are set to the same value: the load balancer's IP, xx.xx.xxx.xxx.
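In zone-file notation, those two A records would look roughly like this (the TTL of 300 is illustrative, and xx.xx.xxx.xxx stands in for the real load-balancer IP as elsewhere in this article):

```
; both domains resolve to the LoadBalancer's external IP
a.com.   300   IN   A   xx.xx.xxx.xxx
b.com.   300   IN   A   xx.xx.xxx.xxx
```

DNS only gets traffic to the load balancer; telling the domains apart is then entirely the job of the Ingress rules.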


  2. Step 1 above only created a Service and selected the application it points at. In the running cluster, the following command shows the corresponding Deployment.
kubectl get  deploy  -n kube-system -l app=ingress-nginx
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller   1/1     1            1           2y24d

The Deployment's YAML is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    component.revision: v5
    component.version: v0.22.0
    deployment.kubernetes.io/revision: '2'
  creationTimestamp: '2018-07-04T11:49:59Z'
  generation: 2
  labels:
    app: ingress-nginx
  name: nginx-ingress-controller
  namespace: kube-system
  resourceVersion: '169291104'
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/nginx-ingress-controller
  uid: 610c6dac-7f80-11e8-b128-00163e10ac41
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
        scheduler.alpha.kubernetes.io/critical-pod: ''
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-nginx
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - /nginx-ingress-controller
        - '--configmap=$(POD_NAMESPACE)/nginx-configuration'
        - '--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services'
        - '--udp-services-configmap=$(POD_NAMESPACE)/udp-services'
        - '--annotations-prefix=nginx.ingress.kubernetes.io'
        - '--publish-service=$(POD_NAMESPACE)/nginx-ingress-lb'
        - '--v=2'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: registry-vpc.cn-beijing.aliyuncs.com/acs/aliyun-ingress-controller:v0.22.0.5-552e0db-aliyun
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          mount -o remount rw /proc/sys
          sysctl -w net.core.somaxconn=65535
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
          sysctl -w fs.file-max=1048576
          sysctl -w fs.inotify.max_user_instances=16384
          sysctl -w fs.inotify.max_user_watches=524288
          sysctl -w fs.inotify.max_queued_events=16384
        image: registry-vpc.cn-beijing.aliyuncs.com/acs/busybox:v1.29.2
        imagePullPolicy: IfNotPresent
        name: init-sysctl
        resources: {}
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            drop:
            - ALL
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-controller
      serviceAccountName: nginx-ingress-controller
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /etc/localtime
          type: File
        name: localtime
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: '2018-07-04T11:51:09Z'
    lastUpdateTime: '2018-07-04T11:51:09Z'
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: 'True'
    type: Available
  - lastTransitionTime: '2018-07-04T11:51:09Z'
    lastUpdateTime: '2020-04-01T22:02:46Z'
    message: ReplicaSet "nginx-ingress-controller-5fc8b968d6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: 'True'
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

  3. Continuing, the following command shows the Pod behind the Deployment:
 kubectl get  pod  -n kube-system -l app=ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-5fc8b968d6-s8gmc   1/1     Running   1          78d

If you exec into the Pod and look at /etc/nginx/nginx.conf, you can see that the domain names configured in the Ingress resources appear in nginx.conf. At this point it should be clear what was meant above: nginx-ingress-controller really does behave like nginx.
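A heavily abridged sketch of what such a generated server block might look like follows. This is illustrative only, not copied from the Pod; the exact directives and upstream names the controller writes vary by version.

```nginx
# Illustrative fragment of a generated /etc/nginx/nginx.conf (abridged).
http {
    server {
        server_name admin.com;     # the host from the Ingress rule
        listen 80;
        listen 443 ssl;            # TLS from the Ingress tls section

        location / {
            # requests are proxied to the endpoints backing service-admin
            proxy_pass http://upstream_balancer;
        }
    }
}
```

Each Ingress host rule becomes (roughly) one such server block, which is why the Ingress domains show up verbatim in the file.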


  4. Look at the YAML of one Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-weight: ''
  creationTimestamp: '2019-09-25T07:01:52Z'
  generation: 3
  name: admin-ingress
  namespace: default
  resourceVersion: '169280718'
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/admin-ingress
  uid: e15d1ea1-c090-11e8-b6f2-00163e109a9e
spec:
  rules:
  - host: admin.com
    http:
      paths:
      - backend:
          serviceName: service-admin
          servicePort: 80
        path: /
  tls:
  - hosts:
    - admin.com
    secretName: admin.com
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xxx.xxx

Explanation: this Ingress forwards requests whose host is admin.com to the Service named service-admin. The domain admin.com can also be seen in the nginx.conf discussed in step 3.
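The tls section of the Ingress above references a Secret named admin.com. Such a Secret is not shown in this article; a typical one would look roughly like this, with the base64 payloads as placeholders rather than real data:

```yaml
# Sketch of the TLS Secret the Ingress refers to; the data values
# below are placeholders, not actual certificate material.
apiVersion: v1
kind: Secret
metadata:
  name: admin.com        # must match secretName in the Ingress tls section
  namespace: default     # must be in the same namespace as the Ingress
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

The controller reads this Secret to terminate HTTPS for admin.com.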


That completes the walkthrough. To summarize: the different domain names all resolve to the same LoadBalancer IP, call it ip-a. The nginx-ingress-controller's Service has type LoadBalancer with that IP ip-a, so requests for all of those domains arrive at the nginx-ingress-controller Service. The Pod behind that Service then applies the rules defined in the Ingress resources to forward each domain's requests to the appropriate backend Service.