Preface
There are many ways to deploy a Kubernetes HA cluster ("HA" here means high availability of the master apiserver), such as VIP failover with keepalived, or load balancing with haproxy/nginx. This series deploys the cluster using haproxy and nginx load balancing. Since Kubernetes has more and more users, there are many similar solutions online; if any part of this article appears to plagiarize your work, please contact me.
1. Environment Preparation
1.1 Host Environment
IP Address       Hostname       Role                        Notes
192.168.15.131 k8s-master01 k8s-master/etcd_cluster01
192.168.15.132 k8s-master02 k8s-master/etcd_cluster01
192.168.15.133 k8s-master03 k8s-master/etcd_cluster01
192.168.15.134 k8s-node01 k8s-node
192.168.15.135 k8s-node02 k8s-node
Note: the hosts are named this way because the etcd cluster is deployed on the same machines and serves only this Kubernetes cluster.
1.2 Software Versions
Docker CE 17.x / Kubernetes 1.7.x
Install docker-ce
# Remove old Docker versions
yum remove docker docker-common docker-selinux docker-engine
# Install docker-ce
yum makecache
yum install -y yum-utils
# Configure the docker-ce yum repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
# Configure a Docker registry mirror (accelerator)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://jek8a03u.mirror.aliyuncs.com"]
}
EOF
# Start docker
systemctl start docker
systemctl enable docker
1.3 System Configuration Changes
# Change the hostname on each machine (omitted)
# Update the hosts file
127.0.0.1       k8s-master01
::1             k8s-master01
192.168.15.131  k8s-master01
192.168.15.132  k8s-master02
192.168.15.133  k8s-master03
192.168.15.134  k8s-node01
192.168.15.135  k8s-node02
# Disable SELinux and the firewall
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
# Install required packages
yum -y install ntpdate gcc git vim wget lrzsz
# Configure a cron job for periodic time synchronization
*/5 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1
1.4 Create Certificates
# Install the certificate tool cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
2. Installing the etcd Cluster
2.1 Generate etcd Certificates
# Create the working directory
mkdir /root/tls/etcd -p
cd /root/tls/etcd
# Create the CSR and CA configuration files
cat <<EOF > etcd-root-ca-csr.json
{
  "key": { "algo": "rsa", "size": 4096 },
  "names": [
    { "O": "etcd", "OU": "etcd Security", "L": "Beijing", "ST": "Beijing", "C": "CN" }
  ],
  "CN": "etcd-root-ca"
}
EOF

cat <<EOF > etcd-gencert.json
{
  "signing": {
    "default": {
      "usages": [ "signing", "key encipherment", "server auth", "client auth" ],
      "expiry": "87600h"
    }
  }
}
EOF

cat <<EOF > etcd-csr.json
{
  "key": { "algo": "rsa", "size": 4096 },
  "names": [
    { "O": "etcd", "OU": "etcd Security", "L": "Beijing", "ST": "Beijing", "C": "CN" }
  ],
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.15.131",
    "192.168.15.132",
    "192.168.15.133",
    "192.168.15.134",
    "192.168.15.135"
  ]
}
EOF
# Generate the certificates
cfssl gencert --initca=true etcd-root-ca-csr.json | cfssljson --bare etcd-root-ca
cfssl gencert --ca etcd-root-ca.pem --ca-key etcd-root-ca-key.pem --config etcd-gencert.json etcd-csr.json | cfssljson --bare etcd
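Optionally, the generated certificate can be sanity-checked with cfssl-certinfo before moving on (a quick check, not part of the original flow): confirm the SAN list contains all etcd node IPs and the expiry matches what you expect.
cfssl-certinfo -cert etcd.pem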
2.2 Install etcd
yum -y install etcd
mkdir /etc/etcd/ssl
cp /root/tls/etcd/{etcd.pem,etcd-key.pem,etcd-root-ca.pem} /etc/etcd/ssl/
chmod 755 -R /etc/etcd/ssl
2.3 Create the etcd Configuration File
cat <<EOF > /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd01
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="10000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.15.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.15.131:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.15.131:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.15.131:2380,etcd02=https://192.168.15.132:2380,etcd03=https://192.168.15.133:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# advertise over https to match ETCD_LISTEN_CLIENT_URLS, since client cert auth is enabled
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.15.131:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_PEER_AUTO_TLS="true"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
EOF
2.4 Distribute the Files to the Other Hosts and Start the Service
scp -r /etc/etcd 192.168.15.132:/etc/
scp -r /etc/etcd 192.168.15.133:/etc/
Note: this configuration file must be adjusted on each host; change ETCD_NAME and the IP addresses to match the local node.
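As an illustration only, assuming the file was copied over unchanged, the adjustments on k8s-master02 might look like this (the second sed deliberately skips the ETCD_INITIAL_CLUSTER line, which must keep all three member IPs):
# Rename the member
sed -i 's/^ETCD_NAME=etcd01/ETCD_NAME=etcd02/' /etc/etcd/etcd.conf
# Point the listen/advertise URLs at the local address, leaving ETCD_INITIAL_CLUSTER untouched
sed -i '/^ETCD_INITIAL_CLUSTER=/!s/192\.168\.15\.131/192.168.15.132/g' /etc/etcd/etcd.conf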
# Start the service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
Note: in a cluster, at least two etcd members must be started at roughly the same time, otherwise errors will be reported while etcd waits for its peers. For the additional members, simply copy over the certificates, the configuration file, and the unit file.
# Check the cluster status
export ETCDCTL_API=3
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints=https://192.168.15.131:2379,https://192.168.15.132:2379,https://192.168.15.133:2379 endpoint health
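Besides endpoint health, listing the members is a quick way to confirm that all three nodes joined the same cluster (same TLS flags as above):
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints=https://192.168.15.131:2379 member list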
3. Deploying the Kubernetes Master Services
3.1 Generate Kubernetes Certificates
# Create the working directory
mkdir /root/tls/k8s
cd /root/tls/k8s/
# Create the CSR and CA configuration files
cat <<EOF > k8s-root-ca-csr.json
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 4096 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }
  ]
}
EOF

cat <<EOF > k8s-gencert.json
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat <<EOF > kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.254.0.1",
    "192.168.15.131",
    "192.168.15.132",
    "192.168.15.133",
    "192.168.15.134",
    "192.168.15.135",
    "localhost",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }
  ]
}
EOF

cat <<EOF > kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }
  ]
}
EOF

cat <<EOF > admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "system:masters", "OU": "System" }
  ]
}
EOF
# Generate the certificates
cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
for targetName in kubernetes admin kube-proxy; do
  cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done
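The master configurations below expect these certificates under /etc/kubernetes/ssl, so they need to be copied into place on every master; a sketch of that step (adjust the file list to your own layout) might be:
mkdir -p /etc/kubernetes/ssl
cp k8s-root-ca.pem k8s-root-ca-key.pem kubernetes.pem kubernetes-key.pem /etc/kubernetes/ssl/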
3.2 Install Kubernetes from Binaries
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.8/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/local/bin/
3.3 Generate the Token and kubeconfig Files
# Generate the bootstrap token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
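Note that the apiserver configuration below points --token-auth-file at /etc/kubernetes/ssl/token.csv, so the generated file has to end up there on every master, for example:
cp token.csv /etc/kubernetes/ssl/token.csv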
# Generate the bootstrap kubeconfig
## Generate the kubeconfig
export KUBE_APISERVER="https://127.0.0.1:6443"
#### Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
### Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
### Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
### Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Generate the kube-proxy kubeconfig
### Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
### Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
### Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
### Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
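The node-side configurations below reference /etc/kubernetes/bootstrap.kubeconfig and /etc/kubernetes/kube-proxy.kubeconfig, so both files need to be distributed to every node (once /etc/kubernetes exists there); for instance:
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.15.134:/etc/kubernetes/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.15.135:/etc/kubernetes/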
3.4 Deploy the Master Services
# Generate the common config file
On the masters, four files need to be created: config, apiserver, controller-manager, and scheduler.
cat <<EOF > /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF
# Generate the apiserver configuration
cat <<EOF > /etc/kubernetes/apiserver
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.15.131 --insecure-bind-address=127.0.0.1 --bind-address=192.168.15.131"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.15.131:2379,https://192.168.15.132:2379,https://192.168.15.133:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1beta1 \\
  --anonymous-auth=false \\
  --kubelet-https=true \\
  --experimental-bootstrap-token-auth \\
  --token-auth-file=/etc/kubernetes/ssl/token.csv \\
  --service-node-port-range=30000-50000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
  --etcd-quorum-read=true \\
  --storage-backend=etcd3 \\
  --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \\
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  --enable-swagger-ui=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kube-audit/audit.log \\
  --event-ttl=1h"
EOF
# Generate the controller-manager configuration
cat <<EOF > /etc/kubernetes/controller-manager
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \\
  --leader-elect=true \\
  --node-monitor-grace-period=40s \\
  --node-monitor-period=5s \\
  --pod-eviction-timeout=5m0s"
EOF
# Generate the scheduler configuration
cat <<EOF > /etc/kubernetes/scheduler
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
EOF
# Create the kube-apiserver systemd unit
cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/local/bin/kube-apiserver \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_ETCD_SERVERS \\
            \$KUBE_API_ADDRESS \\
            \$KUBE_API_PORT \\
            \$KUBELET_PORT \\
            \$KUBE_ALLOW_PRIV \\
            \$KUBE_SERVICE_ADDRESSES \\
            \$KUBE_ADMISSION_CONTROL \\
            \$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Create the kube-controller-manager systemd unit
cat <<EOF > /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/local/bin/kube-controller-manager \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_MASTER \\
            \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Create the kube-scheduler systemd unit
cat <<EOF > /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/local/bin/kube-scheduler \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_MASTER \\
            \$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Start the services
systemctl daemon-reload
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
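Once the three components are running, the control plane can be given a quick check from the same master (kubectl talks to the local insecure port 8080 here):
kubectl get componentstatuses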
3.5 Deploy the Node Services
# Create the directories
mkdir -p /etc/kubernetes/ssl
mkdir -p /var/lib/kubernetes
# kubelet.service below sets WorkingDirectory=/var/lib/kubelet, so that directory must exist too
mkdir -p /var/lib/kubelet
cp kubernetes/server/bin/{kubelet,kubectl,kube-proxy} /usr/local/bin/
# Generate the common config file
cat <<EOF > /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
# KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF
# Generate the kubelet configuration
cat <<EOF > /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.15.134"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.15.134"

# location of the api-server
# KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# Add your own!
# KUBELET_ARGS="--cgroup-driver=systemd"
KUBELET_ARGS="--cgroup-driver=cgroupfs \\
  --cluster-dns=10.254.0.2 \\
  --resolv-conf=/etc/resolv.conf \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --require-kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster-domain=cluster.local. \\
  --hairpin-mode promiscuous-bridge \\
  --serialize-image-pulls=false \\
  --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
EOF
# Generate the kube-proxy configuration
cat <<EOF > /etc/kubernetes/proxy
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.15.134 \\
  --hostname-override=k8s-node01 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --cluster-cidr=10.254.0.0/16"
EOF
Note: none of these config files (including the kubelet and proxy files above) define an API Server address. Because kubelet and kube-proxy are started with the --require-kubeconfig option, they read the API Server address from their *.kubeconfig files and ignore whatever is set in the config files, so an address set there would have no effect.
# Create the ClusterRoleBinding
Because the kubelet uses TLS bootstrapping, under the RBAC policy the kubelet-bootstrap user has no permission to access the API by default. A ClusterRoleBinding granting it the system:node-bootstrapper role must therefore be created in advance; run the following on any master:
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
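If you want to double-check the binding, it can be inspected afterwards:
kubectl get clusterrolebinding kubelet-bootstrap -o yaml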
# Create the kubelet systemd unit
cat << EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBELET_API_SERVER \\
            \$KUBELET_ADDRESS \\
            \$KUBELET_PORT \\
            \$KUBELET_HOSTNAME \\
            \$KUBE_ALLOW_PRIV \\
            \$KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
# Create the kube-proxy systemd unit
cat << EOF > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_MASTER \\
            \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Create the Nginx proxy
The purpose of the nginx proxy is apiserver high availability: rather than maintaining an external load balancer, each node runs a local proxy that balances requests across all masters.
mkdir -p /etc/nginx
cat << EOF > /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
    multi_accept on;
    use epoll;
    worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.15.131:6443;
        server 192.168.15.132:6443;
        server 192.168.15.133:6443;
    }

    server {
        listen                0.0.0.0:6443;
        proxy_pass            kube_apiserver;
        proxy_timeout         10m;
        proxy_connect_timeout 1s;
    }
}
EOF
chmod +r /etc/nginx/nginx.conf
Start the nginx-proxy container
docker run -it -d -p 127.0.0.1:6443:6443 \
  -v /etc/localtime:/etc/localtime \
  -v /etc/nginx:/etc/nginx \
  --name nginx-proxy \
  --net=host \
  --restart=always \
  --memory=512M \
  nginx:1.13.3-alpine
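To confirm the local proxy actually reaches the apiservers, the container log and a test request can be checked (optional); an Unauthorized response from the apiserver is expected here, since no credentials are presented:
docker logs nginx-proxy
curl -k https://127.0.0.1:6443/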
# Start the kubelet
systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
systemctl enable kubelet
# Add the node to the Kubernetes cluster
Because TLS bootstrapping is used, the kubelet does not join the cluster immediately after starting; it first submits a certificate request. The kubelet log shows output like the following:
Jul 19 14:15:31 docker4.node kubelet[18213]: I0719 14:15:31.810914   18213 feature_gate.go:144] feature gates: map[]
Jul 19 14:15:31 docker4.node kubelet[18213]: I0719 14:15:31.811025   18213 bootstrap.go:58] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
At this point you only need to approve the certificate request on a master, as follows:
[root@localhost ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-_xILhfT4Z5FLQsz8csi3tJKLwz0q02U3aTI8MmoHgQg   24s       kubelet-bootstrap   Pending
[root@localhost ~]# kubectl certificate approve node-csr-_xILhfT4Z5FLQsz8csi3tJKLwz0q02U3aTI8MmoHgQg
[root@localhost ~]# kubectl get node
NAME             STATUS    AGE       VERSION
192.168.15.131   Ready     27s       v1.7.3
# Finally, start the kube-proxy component
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
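To confirm kube-proxy is actually programming the node, the iptables rules it creates can be inspected (chain names assumed from the default iptables proxy mode):
iptables-save | grep KUBE-SERVICES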
4. Deploying the Calico Network
4.1 Overview
Calico is used as the network component. Deploying it is fairly straightforward, essentially just creating a few YAML manifests; see https://docs.projectcalico.org/v2.3/getting-started/kubernetes/ for details. Calico has several installation methods and a few prerequisites, and which method to choose depends on how Kubernetes itself was installed; refer to the documentation above.
4.2 Install the Calico Network
Wait, before starting, check your kubelet configuration for the --network-plugin=cni flag and add it if it is missing. Without it, pods will keep getting addresses from the docker0 bridge network instead of the Calico network.
wget http://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml

sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://192.168.15.131:2379,https://192.168.15.132:2379,https://192.168.15.133:2379\"@gi' calico.yaml

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`

sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml

sed -i 's@192.168.0.0/16@10.254.64.0/18@gi' calico.yaml

mkdir /data/kubernetes/calico -p
mv calico.yaml /data/kubernetes/calico/
Note: gcr.io is not reachable from mainland China; one workaround is to add hosts-file entries, for example:
61.91.161.217 gcr.io
61.91.161.217 www.gcr.io
61.91.161.217 packages.cloud.google.com
# Start the pods
kubectl create -f /data/kubernetes/calico/calico.yaml
kubectl apply -f http://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml
Note: slow image pulls (due to network issues and the like) may prevent the pods from starting normally; it is recommended to pull the images locally first and then start the pods.
# Verify the network
cat << EOF > demo.deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: mritd/demo
        ports:
        - containerPort: 80
EOF
kubectl create -f demo.deploy.yml
kubectl get pods -o wide --all-namespaces
Note: kubectl exec into one pod and ping another pod running on a different node; at this point every node should have routes for the other nodes' pod IP ranges. The exact steps are not demonstrated here, but a rough sketch follows.
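A rough sketch of that check, with placeholder pod names and IPs (substitute the values shown by kubectl get pods -o wide):
# List the demo pods with their IPs and the nodes they run on
kubectl get pods -o wide -l app=demo
# Exec into a pod on one node and ping a pod IP on another node (placeholders)
kubectl exec -it <demo-pod-on-node01> -- ping -c 3 <pod-ip-on-node02>
# On each node, Calico-learned routes to the other nodes' pod subnets typically show "proto bird"
ip route | grep bird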
5. Deploying DNS
There are currently two ways to deploy DNS: purely by hand, or via the Addon-manager. Personally I find the Addon-manager a bit cumbersome, so the DNS component is deployed manually below.
5.1 Deploy DNS
The DNS manifests live in the Kubernetes addons directory; download them and make a few small changes.
# Create the directory
mkdir /data/kubernetes/dns
cd /data/kubernetes/dns
# Download the manifests
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-cm.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-sa.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-svc.yaml.sed
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns/kubedns-controller.yaml.sed
mv kubedns-controller.yaml.sed kubedns-controller.yaml
mv kubedns-svc.yaml.sed kubedns-svc.yaml
# Adjust the configuration
sed -i 's/$DNS_DOMAIN/cluster.local/gi' kubedns-controller.yaml
sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kubedns-svc.yaml
Note: DNS_SERVER_IP must be the address you set with --cluster-dns in the kubelet configuration; it is not an arbitrary value.
# Create the resources (I put all the yml files in the dns directory)
kubectl create -f /data/kubernetes/dns
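Before testing resolution, make sure the kube-dns pods and service actually come up, e.g.:
kubectl get pods,svc -n kube-system | grep dns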
# Verify
## Start an nginx pod
cat > my-nginx.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
kubectl create -f my-nginx.yaml
## Expose the Deployment to create the my-nginx service
kubectl expose deploy my-nginx

[root@k8s-master01 ~]# kubectl get services --all-namespaces | grep my-nginx
default       my-nginx     10.254.127.14   <none>        80/TCP    2s
## Create another Pod, check whether its /etc/resolv.conf contains the --cluster-dns and --cluster-domain values configured on the kubelet, and whether the service name my-nginx resolves to the Cluster IP 10.254.127.14 shown above
[root@k8s-master01 ~]# kubectl exec nginx -i -t -- /bin/bash
root@nginx:/# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local. localhost
options ndots:5

root@nginx:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.127.14): 48 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

root@nginx:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 48 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss

root@nginx:/# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 48 data bytes
^C--- kube-dns.kube-system.svc.cluster.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
All of the names above resolve to the correct IP addresses, which shows that the DNS service is working; the 100% packet loss is expected, since service Cluster IPs are virtual and do not answer ICMP.
5.2 Autoscaling the DNS Service
# Create the project directory
mkdir /data/kubernetes/dns-autoscaler
cd /data/kubernetes/dns-autoscaler/
# Download the manifests
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.7/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
Then simply run kubectl create -f on the two files. The DNS autoscaler computes replicas = max( ceil( cores * 1/coresPerReplica ), ceil( nodes * 1/nodesPerReplica ) ). To adjust the number of DNS replicas (the load factors), just change the corresponding parameters in the ConfigMap; see the official documentation referenced above for the calculation details.
# Edit the ConfigMap
kubectl edit cm kube-dns-autoscaler --namespace=kube-system
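As a worked example of the formula above, assuming the commonly used linear parameters coresPerReplica=256 and nodesPerReplica=16 (check the downloaded yaml for the actual defaults), a cluster with 2 nodes and 8 cores in total gives replicas = max( ceil(8/256), ceil(2/16) ) = max(1, 1) = 1. The current parameters can be viewed before editing:
kubectl get cm kube-dns-autoscaler -n kube-system -o yaml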
Note: throughout the whole cluster deployment, some images may fail to download. In that case, change the image address or pull the images locally in advance. Pay close attention to image pull issues; within mainland China, Alibaba Cloud's container registry is recommended, along with its accelerator for pulling Docker Hub images.