Kubernetes Cluster Setup

Environment

OS: CentOS 7.4 64-bit

Software versions: kubernetes-v1.9.9, etcd-v3.3.8, flannel-v0.10.0

Download links:

  1. https://dl.k8s.io/v1.9.9/kubernetes-server-linux-amd64.tar.gz
  2. https://dl.k8s.io/v1.9.9/kubernetes-node-linux-amd64.tar.gz
  3. https://github.com/coreos/etcd/releases/download/v3.3.8/etcd-v3.3.8-linux-amd64.tar.gz
  4. https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

Node layout:

  • 10.0.0.2 Master (kube-apiserver, kube-controller-manager, kube-scheduler, flannel)
  • 10.0.0.3 Node (kubelet, kube-proxy, etcd, flannel)
  • 10.0.0.4 Node (kubelet, kube-proxy, etcd, flannel)
  • 10.0.0.5 Node (kubelet, kube-proxy, etcd, flannel)

Note: flannel is also installed on the Master so that the Kubernetes dashboard can be reached through the kube-apiserver API proxy at k8s-master:8080.

Certificate generation

The Kubernetes components use TLS certificates to encrypt their communication. This document uses cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) and the other certificates.

The generated CA certificate and key files are:

  • k8s-root-ca-key.pem
  • k8s-root-ca.pem
  • kubernetes-key.pem
  • kubernetes.pem
  • kube-proxy.pem
  • kube-proxy-key.pem
  • admin.pem
  • admin-key.pem

The components use the certificates as follows:

  • etcd: k8s-root-ca.pem, kubernetes-key.pem, kubernetes.pem
  • kube-apiserver: k8s-root-ca.pem, kubernetes-key.pem, kubernetes.pem
  • kubelet: k8s-root-ca.pem
  • kube-proxy: k8s-root-ca.pem, kube-proxy-key.pem, kube-proxy.pem
  • kubectl: k8s-root-ca.pem, admin-key.pem, admin.pem

Download the SSL tools:

  wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
  wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
  wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
  chmod +x /usr/local/bin/{cfssl,cfssljson,cfssl-certinfo}

Create the kubernetes directory that will hold certificates and configuration files:

  mkdir -p /etc/kubernetes/ssl

Note: from here on, all certificate files live under /etc/kubernetes/ssl.

Create the JSON files required by cfssl on the Master node:

admin-csr.json

  {
    "CN": "admin",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:masters",
        "OU": "System"
      }
    ]
  }

k8s-gencert.json

  {
    "signing": {
      "default": {
        "expiry": "87600h"
      },
      "profiles": {
        "kubernetes": {
          "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
          ],
          "expiry": "87600h"
        }
      }
    }
  }

k8s-root-ca-csr.json

  {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

kube-proxy-csr.json

  {
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

kubernetes-csr.json

172.31.0.1 is the cluster IP of the internal kubernetes API service; 172.31.0.2 is the cluster DNS service IP. Both must lie inside a private network range, usually a class B block: 172.16.0.0~172.31.255.255.

Additional addresses in the 172.31.0.0/24 range can also be reserved.

  {
      "CN": "kubernetes",
      "hosts": [
        "127.0.0.1",
        "172.31.0.1",
        "172.31.0.2",
        "10.0.0.2",
        "10.0.0.3",
        "10.0.0.4",
        "10.0.0.5",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
      ],
      "key": {
          "algo": "rsa",
          "size": 2048
      },
      "names": [
          {
              "C": "CN",
              "ST": "ShangHai",
              "L": "ShangHai",
              "O": "k8s",
              "OU": "System"
          }
      ]
  }

Generate the CA certificate and private key

  cd /etc/kubernetes/ssl
  cfssl gencert -initca k8s-root-ca-csr.json | cfssljson -bare k8s-root-ca

Generate the kubernetes, admin, and kube-proxy certificates and private keys

  cd /etc/kubernetes/ssl
  for targetName in kubernetes admin kube-proxy; do cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName; done
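
Before distributing the certificates, it is worth a quick sanity check that the expected hosts made it into the SAN list; either of the following (both tools were installed above) dumps the certificate details:

  cfssl-certinfo -cert kubernetes.pem
  openssl x509 -noout -text -in kubernetes.pem | grep -A1 'Subject Alternative Name'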

Distribute the certificates and private keys to all servers

  scp *.pem 10.0.0.3:/etc/kubernetes/ssl
  scp *.pem 10.0.0.4:/etc/kubernetes/ssl
  scp *.pem 10.0.0.5:/etc/kubernetes/ssl
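
The copies above assume /etc/kubernetes/ssl already exists on each Node; if it does not, a small loop such as the following (assuming root SSH access from the Master) creates the directory first and then copies the files:

  for node in 10.0.0.3 10.0.0.4 10.0.0.5; do
    ssh $node "mkdir -p /etc/kubernetes/ssl"
    scp *.pem $node:/etc/kubernetes/ssl
  done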

etcd cluster setup

Add host entries to /etc/hosts:

  10.0.0.3 etcd01
  10.0.0.4 etcd02
  10.0.0.5 etcd03

Extract etcd and etcdctl from etcd-v3.3.8-linux-amd64.tar.gz into /usr/local/bin/ on all three servers.

Create the etcd data directory on all three servers:

  mkdir -p /data/etcd

10.0.0.3:/usr/lib/systemd/system/etcd.service

  [Unit]
  Description=Etcd Server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  Documentation=https://github.com/coreos
  [Service]
  Type=notify
  WorkingDirectory=/data/etcd/
  EnvironmentFile=-/etc/kubernetes/etcd.conf
  ExecStart=/usr/local/bin/etcd \
  --name=etcd01 \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --initial-advertise-peer-urls=https://10.0.0.3:2380 \
  --listen-peer-urls=https://10.0.0.3:2380 \
  --listen-client-urls=https://10.0.0.3:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.0.0.3:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd01=https://10.0.0.3:2380,etcd02=https://10.0.0.4:2380,etcd03=https://10.0.0.5:2380 \
  --initial-cluster-state=new \
  --data-dir=/data/etcd
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

10.0.0.4:/usr/lib/systemd/system/etcd.service

  [Unit]
  Description=Etcd Server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  Documentation=https://github.com/coreos
  [Service]
  Type=notify
  WorkingDirectory=/data/etcd/
  EnvironmentFile=-/etc/kubernetes/etcd.conf
  ExecStart=/usr/local/bin/etcd \
  --name=etcd02 \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --initial-advertise-peer-urls=https://10.0.0.4:2380 \
  --listen-peer-urls=https://10.0.0.4:2380 \
  --listen-client-urls=https://10.0.0.4:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.0.0.4:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd01=https://10.0.0.3:2380,etcd02=https://10.0.0.4:2380,etcd03=https://10.0.0.5:2380 \
  --initial-cluster-state=new \
  --data-dir=/data/etcd
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

10.0.0.5:/usr/lib/systemd/system/etcd.service

  [Unit]
  Description=Etcd Server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  Documentation=https://github.com/coreos
  [Service]
  Type=notify
  WorkingDirectory=/data/etcd/
  EnvironmentFile=-/etc/kubernetes/etcd.conf
  ExecStart=/usr/local/bin/etcd \
  --name=etcd03 \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --initial-advertise-peer-urls=https://10.0.0.5:2380 \
  --listen-peer-urls=https://10.0.0.5:2380 \
  --listen-client-urls=https://10.0.0.5:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.0.0.5:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd01=https://10.0.0.3:2380,etcd02=https://10.0.0.4:2380,etcd03=https://10.0.0.5:2380 \
  --initial-cluster-state=new \
  --data-dir=/data/etcd
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

Start etcd on each of the three servers:

  systemctl enable etcd
  systemctl start etcd

Verify cluster health

  etcdctl --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    cluster-health

If the output shows cluster is healthy, the etcd cluster is working.
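
Optionally, listing the members confirms that all three nodes joined and a leader was elected:

  etcdctl --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    member list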

flannel network configuration

Extract flanneld and mk-docker-opts.sh from flannel-v0.10.0-linux-amd64.tar.gz into /usr/local/bin/ on all four servers (every node, including the Master and the Nodes).

flanneld configuration file /etc/kubernetes/flannel:

  # Flanneld configuration options
  # etcd url location.  Point this to the server where etcd runs
  ETCD_ENDPOINTS="https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379"
  # etcd config key.  This is the configuration key that flannel queries
  # For address range assignment
  ETCD_PREFIX="/kubernetes/network"
  # Any additional options that you want to pass
  FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem -iface=eth0"

flanneld unit file /usr/lib/systemd/system/flanneld.service:

  [Unit]
  Description=Flanneld overlay address etcd agent
  After=network.target
  After=network-online.target
  Wants=network-online.target
  After=etcd.service
  Before=docker.service
  [Service]
  Type=notify
  EnvironmentFile=/etc/kubernetes/flannel
  EnvironmentFile=-/etc/kubernetes/docker-network
  ExecStart=/usr/local/bin/flanneld \
    -etcd-endpoints=${ETCD_ENDPOINTS} \
    -etcd-prefix=${ETCD_PREFIX} \
    $FLANNEL_OPTIONS
  ExecStartPost=/usr/local/bin/mk-docker-opts.sh -d /run/flannel/docker
  Restart=on-failure
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target
  RequiredBy=docker.service

Create the network configuration in etcd

  etcdctl --endpoints=https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379 \
    --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    mkdir /kubernetes/network
  etcdctl --endpoints=https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379 \
    --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{ "Type": "vxlan", "VNI": 1 }}'

Start flannel:

  systemctl enable flanneld
  systemctl start flanneld

Verify the network configuration stored in etcd

  etcdctl --endpoints=https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379 \
    --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    get /kubernetes/network/config
  etcdctl --endpoints=https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379 \
    --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    ls /kubernetes/network/subnets
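
Each host should also have been assigned its own /24 out of 172.30.0.0/16; a quick local check (the first two files are written by flanneld and by the mk-docker-opts.sh ExecStartPost above):

  cat /run/flannel/subnet.env   # FLANNEL_SUBNET / FLANNEL_MTU for this host
  cat /run/flannel/docker       # docker options derived from the flannel subnet
  ip -4 addr show flannel.1     # the vxlan interface created by flanneld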

docker-ce installation

Note: Docker is installed on the Node servers. hub.linuxeye.com is an internal private registry, used here only as a reference; VMware Harbor is recommended.

  wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  yum -y install docker-ce

Edit /usr/lib/systemd/system/docker.service:

  [Unit]
  Description=Docker Application Container Engine
  Documentation=https://docs.docker.com
  After=network-online.target firewalld.service
  Wants=network-online.target
  [Service]
  Type=notify
  # the default is not to use systemd for cgroups because the delegate issues still
  # exists and systemd currently does not support the cgroup feature set required
  # for containers run by docker
  EnvironmentFile=-/run/flannel/docker
  ExecStart=/usr/bin/dockerd --insecure-registry hub.linuxeye.com --data-root=/data/docker --log-opt max-size=1024m --log-opt max-file=10 $DOCKER_OPTS
  ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
  ExecReload=/bin/kill -s HUP $MAINPID
  # Having non-zero Limit*s causes performance problems due to accounting overhead
  # in the kernel. We recommend using cgroups to do container-local accounting.
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  # Uncomment TasksMax if your systemd version supports it.
  # Only systemd 226 and above support this version.
  #TasksMax=infinity
  TimeoutStartSec=0
  # set delegate yes so that systemd does not reset the cgroups of docker containers
  Delegate=yes
  # kill only the docker process, not all processes in the cgroup
  KillMode=process
  # restart the docker process if it exits prematurely
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s
  [Install]
  WantedBy=multi-user.target

After editing docker.service, run:

  systemctl daemon-reload
  systemctl start docker

Note: Docker must start after flannel.
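
The stock docker.service shown above does not itself declare a dependency on flanneld; one way to enforce the ordering, sketched as a systemd drop-in (the file name after-flannel.conf is arbitrary):

  mkdir -p /etc/systemd/system/docker.service.d
  printf '[Unit]\nAfter=flanneld.service\nRequires=flanneld.service\n' \
    > /etc/systemd/system/docker.service.d/after-flannel.conf
  systemctl daemon-reload && systemctl restart docker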

Kubeconfig generation

On the 10.0.0.2 Master, extract the binaries from kubernetes-server-linux-amd64.tar.gz into /usr/local/bin.

A kubeconfig file records the cluster's connection details and is essential to assembling the cluster.

  • The kubectl command-line tool reads the kube-apiserver address, certificates, and user name from its kubeconfig file, ~/.kube/config
  • kubelet, kube-proxy, and the other programs on the Nodes likewise talk to the Master using the credentials provided in bootstrap.kubeconfig and kube-proxy.kubeconfig

Run the following on the Master, i.e. 10.0.0.2.

Kubectl kubeconfig

The generated file is written to ~/.kube/config.

Declare the kube-apiserver address:

  export KUBE_APISERVER="https://10.0.0.2:6443"

Set the cluster parameters:

  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER}

Set the client credentials:

  kubectl config set-credentials admin \
    --client-certificate=/etc/kubernetes/ssl/admin.pem \
    --embed-certs=true \
    --client-key=/etc/kubernetes/ssl/admin-key.pem

Set the context:

  kubectl config set-context kubernetes \
    --cluster=kubernetes \
    --user=admin

Set the default context:

  kubectl config use-context kubernetes
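
You can confirm the result with:

  kubectl config view              # shows the cluster, user, and context entries
  kubectl config current-context   # should print: kubernetes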

Kubelet kubeconfig

Declare the kube-apiserver address:

  export KUBE_APISERVER="https://10.0.0.2:6443"

Set the cluster parameters:

  cd /etc/kubernetes
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=bootstrap.kubeconfig

Set the client credentials:

  cd /etc/kubernetes
  export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=bootstrap.kubeconfig

Set the context:

  cd /etc/kubernetes
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=bootstrap.kubeconfig

Set the default context:

  cd /etc/kubernetes
  kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Kube-proxy kubeconfig

Generate the token authentication file, reusing the BOOTSTRAP_TOKEN exported above (the format expected by kube-apiserver's --token-auth-file is token,user,uid,"group"):

  cd /etc/kubernetes
  cat > token.csv <<EOF
  ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  EOF

Declare the kube-apiserver address:

  export KUBE_APISERVER="https://10.0.0.2:6443"

Set the cluster parameters:

  cd /etc/kubernetes
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-proxy.kubeconfig

Set the client credentials:

  cd /etc/kubernetes
  kubectl config set-credentials kube-proxy \
    --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
    --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

Set the context:

  cd /etc/kubernetes
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

Set the default context:

  cd /etc/kubernetes
  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig files

Copy the generated kubeconfig files to /etc/kubernetes on every Node machine:

  scp ~/.kube/config 10.0.0.3:/etc/kubernetes/kubelet.kubeconfig
  scp /etc/kubernetes/bootstrap.kubeconfig 10.0.0.3:/etc/kubernetes/bootstrap.kubeconfig
  scp /etc/kubernetes/kube-proxy.kubeconfig 10.0.0.3:/etc/kubernetes/kube-proxy.kubeconfig
  scp ~/.kube/config 10.0.0.4:/etc/kubernetes/kubelet.kubeconfig
  scp /etc/kubernetes/bootstrap.kubeconfig 10.0.0.4:/etc/kubernetes/bootstrap.kubeconfig
  scp /etc/kubernetes/kube-proxy.kubeconfig 10.0.0.4:/etc/kubernetes/kube-proxy.kubeconfig
  scp ~/.kube/config 10.0.0.5:/etc/kubernetes/kubelet.kubeconfig
  scp /etc/kubernetes/bootstrap.kubeconfig 10.0.0.5:/etc/kubernetes/bootstrap.kubeconfig
  scp /etc/kubernetes/kube-proxy.kubeconfig 10.0.0.5:/etc/kubernetes/kube-proxy.kubeconfig

Master setup

Common configuration file /etc/kubernetes/config:

  ###
  # kubernetes system config
  #
  # The following values are used to configure various aspects of all
  # kubernetes services, including
  #
  #   kube-apiserver.service
  #   kube-controller-manager.service
  #   kube-scheduler.service
  #   kubelet.service
  #   kube-proxy.service
  # logging to stderr means we get it in the systemd journal
  KUBE_LOGTOSTDERR="--logtostderr=true"
  # journal message level, 0 is debug
  KUBE_LOG_LEVEL="--v=0"
  # Should this cluster be allowed to run privileged docker containers
  KUBE_ALLOW_PRIV="--allow-privileged=true"
  # How the controller-manager, scheduler, and proxy find the apiserver
  KUBE_MASTER="--master=http://10.0.0.2:8080"

kube-apiserver configuration file /etc/kubernetes/kube-apiserver:

  ###
  ## kubernetes system config
  ##
  ## The following values are used to configure the kube-apiserver
  ##
  #
  ## The address on the local server to listen to.
  KUBE_API_ADDRESS="--advertise-address=10.0.0.2 --bind-address=10.0.0.2 --insecure-bind-address=10.0.0.2"
  #
  ## The port on the local server to listen on.
  #KUBE_API_PORT="--port=8080"
  #
  ## Port minions listen on
  #KUBELET_PORT="--kubelet-port=10250"
  #
  ## Comma separated list of nodes in the etcd cluster
  KUBE_ETCD_SERVERS="--etcd-servers=https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379"
  #
  ## Address range to use for services
  KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.31.0.0/16"
  #
  ## default admission control policies
  #KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
  KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction"
  #
  ## Add your own!
  KUBE_API_ARGS=" --enable-bootstrap-token-auth \
                  --authorization-mode=RBAC,Node \
                  --runtime-config=rbac.authorization.k8s.io/v1 \
                  --kubelet-https=true \
                  --service-node-port-range=30000-65000 \
                  --token-auth-file=/etc/kubernetes/token.csv \
                  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
                  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
                  --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                  --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                  --etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem \
                  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
                  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
                  --enable-swagger-ui=true \
                  --apiserver-count=3 \
                  --audit-log-maxage=30 \
                  --audit-log-maxbackup=3 \
                  --audit-log-maxsize=100 \
                  --audit-log-path=/var/lib/audit.log \
                  --event-ttl=1h"

kube-apiserver unit file /usr/lib/systemd/system/kube-apiserver.service:

  [Unit]
  Description=Kubernetes API Service
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.target
  After=etcd.service
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/kube-apiserver
  ExecStart=/usr/local/bin/kube-apiserver \
          $KUBE_LOGTOSTDERR \
          $KUBE_LOG_LEVEL \
          $KUBE_ETCD_SERVERS \
          $KUBE_API_ADDRESS \
          $KUBE_API_PORT \
          $KUBELET_PORT \
          $KUBE_ALLOW_PRIV \
          $KUBE_SERVICE_ADDRESSES \
          $KUBE_ADMISSION_CONTROL \
          $KUBE_API_ARGS
  Restart=on-failure
  RestartSec=15
  Type=notify
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

kube-controller-manager configuration file /etc/kubernetes/kube-controller-manager:

  ###
  # The following values are used to configure the kubernetes controller-manager
  # defaults from config and apiserver should be adequate
  # Add your own!
  KUBE_CONTROLLER_MANAGER_ARGS="--address=10.0.0.2 \
                                --service-cluster-ip-range=172.31.0.0/16 \
                                --cluster-cidr=172.30.0.0/16 \
                                --allocate-node-cidrs=true \
                                --cluster-name=kubernetes \
                                --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                                --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                                --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                                --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                                --leader-elect=true \
                                --v=2 \
                                --horizontal-pod-autoscaler-use-rest-clients=false"

kube-controller-manager unit file /usr/lib/systemd/system/kube-controller-manager.service:

  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/kube-controller-manager
  ExecStart=/usr/local/bin/kube-controller-manager \
          $KUBE_LOGTOSTDERR \
          $KUBE_LOG_LEVEL \
          $KUBE_MASTER \
          $KUBE_CONTROLLER_MANAGER_ARGS
  Restart=on-failure
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

kube-scheduler configuration file /etc/kubernetes/kube-scheduler:

  ###
  # kubernetes scheduler config
  # default config should be adequate
  # Add your own!
  KUBE_SCHEDULER_ARGS="--leader-elect=true \
                       --address=10.0.0.2"

kube-scheduler unit file /usr/lib/systemd/system/kube-scheduler.service:

  [Unit]
  Description=Kubernetes Scheduler Plugin
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/kube-scheduler
  ExecStart=/usr/local/bin/kube-scheduler \
              $KUBE_LOGTOSTDERR \
              $KUBE_LOG_LEVEL \
              $KUBE_MASTER \
              $KUBE_SCHEDULER_ARGS
  Restart=on-failure
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

Enable and start the services:

  systemctl enable kube-apiserver
  systemctl enable kube-controller-manager
  systemctl enable kube-scheduler
  systemctl start kube-apiserver
  systemctl start kube-controller-manager
  systemctl start kube-scheduler
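
Once all three services are up, a quick health check from the Master should show output similar to the following:

  kubectl get componentstatuses
  # NAME                 STATUS    MESSAGE              ERROR
  # scheduler            Healthy   ok
  # controller-manager   Healthy   ok
  # etcd-0               Healthy   {"health": "true"}
  # etcd-1               Healthy   {"health": "true"}
  # etcd-2               Healthy   {"health": "true"}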

Node setup

kubelet

Create the bootstrap role binding (run on the 10.0.0.2 Master):

  kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap

Note: the remaining steps are performed on every Node.

Common configuration file /etc/kubernetes/config:

  ###
  # kubernetes system config
  #
  # The following values are used to configure various aspects of all
  # kubernetes services, including
  #
  #   kube-apiserver.service
  #   kube-controller-manager.service
  #   kube-scheduler.service
  #   kubelet.service
  #   kube-proxy.service
  # logging to stderr means we get it in the systemd journal
  KUBE_LOGTOSTDERR="--logtostderr=true"
  # journal message level, 0 is debug
  KUBE_LOG_LEVEL="--v=0"
  # Should this cluster be allowed to run privileged docker containers
  KUBE_ALLOW_PRIV="--allow-privileged=true"
  # How the controller-manager, scheduler, and proxy find the apiserver
  KUBE_MASTER="--master=http://10.0.0.2:8080"

Disable swap

  swapoff -a  # kubelet fails to start if swap is enabled
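
swapoff -a only lasts until the next reboot; to keep swap disabled permanently, also comment out the swap entry in /etc/fstab, for example:

  sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab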

Create the kubelet data directory:

  mkdir -p /data/kubelet

kubelet configuration file /etc/kubernetes/kubelet (shown for 10.0.0.3; adjust the address and hostname on each Node):

  ###
  ## kubernetes kubelet (minion) config
  #
  ## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
  KUBELET_ADDRESS="--address=10.0.0.3"
  #
  ## The port for the info server to serve on
  #
  ## You may leave this blank to use the actual hostname
  KUBELET_HOSTNAME="--hostname-override=10.0.0.3"
  #
  ## pod infrastructure container
  KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=hub.linuxeye.com/rhel7/pod-infrastructure:latest"
  #
  ## Add your own!
  KUBELET_ARGS="--cluster-dns=172.31.0.2 \
                --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
                --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
                --require-kubeconfig \
                --cert-dir=/etc/kubernetes/ssl \
                --cluster-domain=cluster.local \
                --hairpin-mode promiscuous-bridge \
                --serialize-image-pulls=false \
                --container-runtime=docker \
                --register-node \
                --tls-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                --tls-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                --root-dir=/data/kubelet"

kubelet unit file /usr/lib/systemd/system/kubelet.service:

  [Unit]
  Description=Kubernetes Kubelet Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=docker.service
  Requires=docker.service
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/kubelet
  ExecStart=/usr/local/bin/kubelet \
              $KUBE_LOGTOSTDERR \
              $KUBE_LOG_LEVEL \
              $KUBELET_API_SERVER \
              $KUBELET_ADDRESS \
              $KUBELET_PORT \
              $KUBELET_HOSTNAME \
              $KUBE_ALLOW_PRIV \
              $KUBELET_POD_INFRA_CONTAINER \
              $KUBELET_ARGS
  Restart=on-failure
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

Start kubelet:

  systemctl enable kubelet
  systemctl start kubelet
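
With TLS bootstrapping, each kubelet first submits a certificate signing request that must be approved before the node registers. Back on the 10.0.0.2 Master:

  kubectl get csr
  kubectl certificate approve <csr-name>   # repeat for every Pending request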

kube-proxy

Install dependencies:

  yum install -y conntrack-tools

kube-proxy configuration file /etc/kubernetes/proxy:

  ###
  # kubernetes proxy config
  # default config should be adequate
  # Add your own!
  # --cluster-cidr is the pod network (the flannel range), not the service range
  KUBE_PROXY_ARGS="--bind-address=10.0.0.3 \
                   --hostname-override=10.0.0.3 \
                   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                   --cluster-cidr=172.30.0.0/16"

kube-proxy unit file /usr/lib/systemd/system/kube-proxy.service:

  [Unit]
  Description=Kubernetes Kube-Proxy Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.target
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/proxy
  ExecStart=/usr/local/bin/kube-proxy \
          $KUBE_LOGTOSTDERR \
          $KUBE_LOG_LEVEL \
          $KUBE_MASTER \
          $KUBE_PROXY_ARGS
  Restart=on-failure
  LimitNOFILE=1000000
  LimitNPROC=1000000
  LimitCORE=1000000
  [Install]
  WantedBy=multi-user.target

Start kube-proxy:

  systemctl enable kube-proxy
  systemctl start kube-proxy
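
kube-proxy defaults to iptables mode in v1.9; assuming that mode, you can verify it has programmed its service chains:

  iptables -t nat -S KUBE-SERVICES | head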

Verify the Nodes

Run this on the 10.0.0.2 Master. A node whose status is Ready is healthy; otherwise, re-check the whole installation.

  kubectl get nodes
  NAME         STATUS    ROLES     AGE       VERSION
  10.0.0.3     Ready     <none>    1m        v1.9.9

Add-on components

Perform the following operations on the 10.0.0.2 Master.

Deploy CoreDNS

  wget https://github.com/coredns/deployment/raw/master/kubernetes/coredns.yaml.sed
  wget https://github.com/coredns/deployment/raw/master/kubernetes/deploy.sh
  ./deploy.sh -i 172.31.0.2 | kubectl apply -f -
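
Once the CoreDNS pods are Running, in-cluster resolution can be verified with a throwaway pod (the busybox image may need to come from your internal registry):

  kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
    nslookup kubernetes.default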

Deploy Heapster

  wget https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/rbac/heapster-rbac.yaml
  wget https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/influxdb/grafana.yaml
  wget https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/influxdb/heapster.yaml
  wget https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/influxdb/influxdb.yaml
  kubectl apply -f heapster-rbac.yaml
  kubectl apply -f grafana.yaml
  kubectl apply -f heapster.yaml
  kubectl apply -f influxdb.yaml

Deploy kube-state-metrics

Reference yaml files: https://github.com/kubernetes/kube-state-metrics/tree/master/kubernetes

kube-state-metrics.yaml

  kubectl apply -f kube-state-metrics.yaml

kube-state-metrics-deploy.yaml

  kubectl apply -f kube-state-metrics-deploy.yaml

Dashboard setup

The dashboard yaml files can be based on:

https://github.com/kubernetes/dashboard/blob/master/src/deploy/recommended/kubernetes-dashboard.yaml

Pay particular attention to the certificate and the Docker image address; the yaml below has the certificate part removed.

dashboard-rbac.yaml

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: dashboard
    namespace: kube-system
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1beta1
  metadata:
    name: dashboard
  subjects:
    - kind: ServiceAccount
      name: dashboard
      namespace: kube-system
  roleRef:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io

dashboard-service.yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  spec:
    selector:
      k8s-app: kubernetes-dashboard
    ports:
    - port: 80
      targetPort: 9090

dashboard-controller.yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  spec:
    selector:
      matchLabels:
        k8s-app: kubernetes-dashboard
    template:
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        annotations:
          scheduler.alpha.kubernetes.io/critical-pod: ''
      spec:
        serviceAccountName: dashboard
        containers:
        - name: kubernetes-dashboard
          image: hub.linuxeye.com/google_containers/kubernetes-dashboard-amd64:v1.8.3
          resources:
            limits:
              cpu: 1000m
              memory: 2000Mi
            requests:
              cpu: 1000m
              memory: 2000Mi
          ports:
          - containerPort: 9090
          livenessProbe:
            httpGet:
              path: /
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
        tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"

Apply the dashboard yaml files:

  kubectl apply -f dashboard-rbac.yaml
  kubectl apply -f dashboard-service.yaml
  kubectl apply -f dashboard-controller.yaml

Verify that the dashboard is running:

  kubectl get pods -n kube-system -o wide
  NAME                                    READY     STATUS    RESTARTS   AGE       IP           NODE
  kube-state-metrics-7859f12bfb-k22m9     2/2       Running   0          19h       172.7.17.2   10.0.0.2
  kubernetes-dashboard-745998bbd4-n9dfd   1/1       Running   0          18h       172.7.17.3   10.0.0.2

A steady Running status is normal; otherwise something is likely wrong, and you can check the error with:

  kubectl logs kubernetes-dashboard-745998bbd4-n9dfd -n kube-system
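
If the pod never reaches Running, kubectl describe usually shows the scheduling or image-pull events that explain why:

  kubectl describe pod kubernetes-dashboard-745998bbd4-n9dfd -n kube-system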

Finally, a look at the kubernetes-dashboard UI.

Access the UI through the Master API proxy: http://10.0.0.2:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
