
kubernetes 1.11 manual setup

Popularity: 1   Published: 2024-01-11 12:50:15


This experiment builds an internal k8s cluster by hand, i.e. without authentication: 1. the VMs are created with Vagrant and VirtualBox; 2. the design is one etcd instance and one node; for now the master does not double as a node, so the cluster is one master plus one node.
  • pre

      I had long wanted a simple tutorial on building k8s by hand (without
      authentication), to get an initial feel for k8s and form a simple mental
      model. But the tutorials I found either did not cover the right version
      or leaned on automated tooling, so in the end I chose to build it
      manually and work through the problems that came with that.
    
  • cluster info

    node1 is the master; node2 is the cluster's worker node.
    
    name           ip
    master/node1   192.168.59.11
    node2          192.168.59.12
    • Note:
      • A simpler option is a single-node k8s environment
        • i.e., skip node2 and deploy the master and node components on one machine
        • node1 is then configured the same way as node2; only the relevant parameters (e.g. the kubelet address) need changing
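The VM layout above could be described by a Vagrantfile along these lines. This is a sketch, not from the original article: only the hostnames and IPs come from the table above, while the box name and memory size are assumptions.

```ruby
# Hypothetical Vagrantfile: two Ubuntu 16.04 VMs with the IPs from the table.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"   # Ubuntu 16.04 (assumed box name)
  {"node1" => "192.168.59.11", "node2" => "192.168.59.12"}.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 2048              # assumed; adjust to your host
      end
    end
  end
end
```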
  • Setup process

    • Overview

      1. Obtain the required kubernetes binaries; here they are built from source
      2. Obtain the etcd binary, likewise built from source
      3. Bring up the VMs; virtualbox, vagrant and ubuntu16.04 are used here
      4. On the master, configure etcd, kube-apiserver, kube-controller-manager
      and kube-scheduler
      5. On node2, configure kubelet and kube-proxy
      6. Verify the cluster: run kubectl get nodes on the master to check its status
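Step 4 (the master components) might look like the following minimal, unauthenticated unit files. This is a sketch: the flag names exist in the k8s 1.11 release, but the etcd URL, service CIDR and file layout are assumptions, not from the original article.

```ini
# /etc/systemd/system/kube-apiserver.service (sketch)
[Service]
ExecStart=/usr/bin/kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --service-cluster-ip-range=10.0.0.0/16 \
  --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure

# /etc/systemd/system/kube-controller-manager.service (sketch)
[Service]
ExecStart=/usr/bin/kube-controller-manager \
  --master=http://192.168.59.11:8080 \
  --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure

# /etc/systemd/system/kube-scheduler.service (sketch)
[Service]
ExecStart=/usr/bin/kube-scheduler \
  --master=http://192.168.59.11:8080 \
  --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure
```

The insecure port 8080 matches the server address used by the kubelet's kubeconfig later in this article.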

    • For the detailed steps, see:

      • kubernetes权威指南 (The Definitive Guide to Kubernetes), Chapter 2
      • domac的菜园子: 深入学习Kubernetes(三):手工安装Kubernetes (domac's blog: Learning Kubernetes in Depth (3): Installing Kubernetes by Hand)
    • Result:

      root@node1:/etc/kubernetes# kubectl get nodes
      NAME      STATUS    ROLES     AGE       VERSION
      node2     Ready     <none>    1h        v1.11.3-beta.0.3+798ca4d3ceb5b2
  • QA

    • Q: k8s has changed a lot between versions; the references target pre-1.8, and many startup parameters have changed
    • A: in k8s 1.8+ the kubelet's --api-servers flag was removed; the API server address now comes from a kubelet kubeconfig file

      • kubelet startup parameters
      
      # /etc/systemd/system/kubelet.service
      [Service]
      WorkingDirectory=/var/lib/kubelet
      EnvironmentFile=/etc/kubernetes/kubelet
      ExecStart=/usr/bin/kubelet  $KUBELET_ARGS  $KUBELET_ADDRESS
      
      # /etc/kubernetes/kubelet
      KUBELET_ADDRESS="--address=192.168.59.12"
      KUBELET_ARGS="--kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
        --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
      
      • bootstrap.kubeconfig (with authentication enabled, SSL material etc. would be generated here)
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data:
          server: https://192.168.59.11:8080
        name: kubernetes
      contexts:
      - context:
          cluster: kubernetes
          user: kubelet-bootstrap
        name: default
      current-context: default
      kind: Config
      preferences: {}
      users:
      - name: kubelet-bootstrap
        user:
          token:
    • Q: kubectl get nodes on the master returns "No resources found."
    • A: the cause is failed authentication: this experiment runs without
      authentication, but the server address in bootstrap.kubeconfig uses
      https, which expects it; change the scheme to http and the API becomes reachable
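The scheme change can be scripted with a one-line sed. The sketch below demonstrates it on a scratch copy; on the real node2 you would point sed at /etc/kubernetes/bootstrap.kubeconfig and then restart the kubelet.

```shell
# Demonstrate the https -> http fix on a scratch copy of the kubeconfig.
# On the real node2:
#   sed -i 's#server: https://#server: http://#' /etc/kubernetes/bootstrap.kubeconfig
#   systemctl restart kubelet
cfg=$(mktemp)
printf 'server: https://192.168.59.11:8080\n' > "$cfg"
sed -i 's#server: https://#server: http://#' "$cfg"
cat "$cfg"   # -> server: http://192.168.59.11:8080
```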

      • kubectl get nodes

        root@node1:/etc/kubernetes# kubectl get nodes
        NAME      STATUS    ROLES     AGE       VERSION
        node2     Ready     <none>    1h        v1.11.3-beta.0.3+798ca4d3ceb5b2
      • Troubleshooting

        • Inspect the kubelet service's error log

          kubelet.service - Kubernetes Kubelet Server
          Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
          Active: active (running) since Sat 2018-08-25 13:49:07 UTC; 12h ago
          Main PID: 14611 (kubelet)
          Tasks: 12
          Memory: 43.4M
          CPU: 42.624s
          CGroup: /system.slice/kubelet.service
          └─14611 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --address=192.168.59.12
          Aug 26 02:13:14 node2 kubelet[14611]: E0826 02:13:14.960652   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.59.11:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
          Aug 26 02:13:14 node2 kubelet[14611]: E0826 02:13:14.966460   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
          Aug 26 02:13:15 node2 kubelet[14611]: E0826 02:13:15.016605   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.59.11:8080/api/v1/services?limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
          Aug 26 02:13:15 node2 kubelet[14611]: E0826 02:13:15.963891   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.59.11:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
          

          The log shows failed requests, so test the endpoint directly:
          https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2

      • Test the endpoint with http (HTTPie): an SSL error appears, i.e. the authentication problem; change the scheme to http and retry

        • http https://192.168.59.11:8080/…

          root@node2:/etc/systemd/system# http https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0
          [1] 22638
          [2] 22639
          root@node2:/etc/systemd/system# 
          http: error: SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
          [1]-  Exit 1                  http https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2
          [2]+  Done                    limit=500
          root@node2:/etc/systemd/system# 
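Incidentally, the `[1] 22638` and `[2] 22639` lines above are shell job numbers: the unquoted `&` characters in the URL split the command into background jobs, so only the part before the first `&` actually reaches the client. Quoting the URL avoids this:

```shell
# Quote the URL so the shell does not treat '&' as a job separator.
url='http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0'
echo "$url"     # the full query string survives intact
# http "$url"   # or: curl -s "$url"
```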
        • http http://192.168.59.11:8080/… , which returns a response

          root@node2:/etc/systemd/system# http http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0
          [1] 22833
          [2] 22834
          root@node2:/etc/systemd/system# HTTP/1.1 200 OK
          Content-Type: application/json
          Date: Sun, 26 Aug 2018 03:53:20 GMT
          Transfer-Encoding: chunked
          {
          "apiVersion": "v1", 
          "items": [
          {
          "metadata": {
          "annotations": {"node.alpha.kubernetes.io/ttl": "0", "volumes.kubernetes.io/controller-managed-attach-detach": "true"
          }, 
          "creationTimestamp": "2018-08-26T02:33:27Z", 
          "labels": {"beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "node2"
          }, 
          "name": "node2", 
          "resourceVersion": "15170", 
          "selfLink": "/api/v1/nodes/node2", 
          "uid": "69b8f2ca-a8d8-11e8-a889-02483e15b50c"
          }, 
          "spec": {}, 
          "status": {
          "addresses": [{"address": "192.168.59.12", "type": "InternalIP"}, {"address": "node2", "type": "Hostname"}
          ], 
          "allocatable": {"cpu": "1", "ephemeral-storage": "9306748094", "hugepages-2Mi": "0", "memory": "1945760Ki", "pods": "110"
          }, 
          "capacity": {"cpu": "1", "ephemeral-storage": "10098468Ki", "hugepages-2Mi": "0", "memory": "2048160Ki", "pods": "110"
          }, 
          "conditions": [{"lastHeartbeatTime": "2018-08-26T03:53:13Z", "lastTransitionTime": "2018-08-26T03:15:11Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk"}, {"lastHeartbeatTime": "2018-08-26T03:53:13Z", "lastTransitionTime": "2018-08-26T03:15:11Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure"}, {"lastHeartbeatTime": "2018-08-26T03:53:13Z", "lastTransitionTime": "2018-08-26T03:15:11Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure"}, {"lastHeartbeatTime": "2018-08-26T03:53:13Z", "lastTransitionTime": "2018-08-26T02:33:27Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure"}, {"lastHeartbeatTime": "2018-08-26T03:53:13Z", "lastTransitionTime": "2018-08-26T03:15:21Z", "message": "kubelet is posting ready status. AppArmor enabled", "reason": "KubeletReady", "status": "True", "type": "Ready"}
          ], 
          "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}
          }, 
          "nodeInfo": {"architecture": "amd64", "bootID": "f4cb0a01-e5b9-4851-83d9-ea6556bd285e", "containerRuntimeVersion": "docker://17.3.2", "kernelVersion": "4.4.0-133-generic", "kubeProxyVersion": "v1.11.3-beta.0.3+798ca4d3ceb5b2", "kubeletVersion": "v1.11.3-beta.0.3+798ca4d3ceb5b2", "machineID": "fe02b8afeb1041cfa61a6b1d40371316", "operatingSystem": "linux", "osImage": "Ubuntu 16.04.5 LTS", "systemUUID": "98A4443F-059B-462C-900A-AFA32971670D"
          }
          }
          }
          ], 
          "kind": "NodeList", 
          "metadata": {
          "resourceVersion": "15179", 
          "selfLink": "/api/v1/nodes"
          }
          }
          [1]-  Done                    http http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2
          [2]+  Done                    limit=500
          root@node2:/etc/systemd/system# 
        • Run kubectl get nodes on the master again; the resource is found

          root@node1:/etc/kubernetes# kubectl get nodes
          NAME      STATUS    ROLES     AGE       VERSION
          node2     Ready     <none>    1h        v1.11.3-beta.0.3+798ca4d3ceb5b2
          In summary: because k8s changes between versions, the startup
          parameters change too, and following older material leads to
          problems; the one resolved here is the removal of the kubelet's
          --api-servers flag. Recommendation: follow the classic references
          for the overall build, but when something breaks, always check the
          official docs for your exact version.
          
  • Next

    • Run a demo on the cluster
    • Add authentication