Playing with K8S in Depth: Manually Deploying Kubernetes v1.11, with Answers to Common Questions

Before we begin, let's review the ground we've already covered:

We started by installing with kubeadm, which I call silent black-box (automatic) installation: kubeadm does everything for us, so we never see the concrete steps it performs. That is exactly why I'm writing this manual-deployment post, so that everyone can better understand and experience the difference between the two approaches and the overall deployment flow.

After that we learned how to access the cluster through the Dashboard, along with some important skills and concepts such as Label tags, the DaemonSet scheduling gem, and application health checks.

We also covered operations closer to real-world use, such as the simplest form of external access (good for beginners to try things quickly), advanced external access with nginx-ingress and traefik-ingress (real production scenarios), plus elastic scaling and rolling upgrades of our workloads.

Finally we studied storage resource management and external configuration management with ConfigMap, both very practical in real scenarios. OK, today we'll learn how to build Kubernetes by hand; as I just said, the goal is to help everyone understand K8S better. Some may ask why we're going backwards after all this progress. My apologies: I didn't plan the series out well and left manual deployment until now. Enough chit-chat, let's see how to deploy.

Environment:

CentOS 7.4 minimal, Docker 17.03-ce, etcd 3.1, Kubernetes 1.11

We'll use three nodes here to build a lab environment:

10.0.100.202 k8s-master

10.0.100.203 k8s-node1

10.0.100.204 k8s-node2

 

Prepare the environment (run the 6 steps below on all nodes):

1. Configure the hosts file on every node

2. Disable the system firewall on every node

3. Disable SELinux on every node

4. Disable swap on every node (comment out the swap line in /etc/fstab)
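For reference, steps 1 through 4 can be scripted roughly as follows (a minimal sketch; the hostnames and IPs are the ones from the environment above):

# append cluster host entries on every node
cat >> /etc/hosts <<EOF
10.0.100.202 k8s-master
10.0.100.203 k8s-node1
10.0.100.204 k8s-node2
EOF
# stop the firewall now and keep it off across reboots
systemctl stop firewalld && systemctl disable firewalld
# disable SELinux for this boot and permanently
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# turn swap off now and comment it out of /etc/fstab
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab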

5. Configure kernel parameters on every node so that traffic crossing the bridge also passes through the iptables/netfilter framework, by adding the following to /etc/sysctl.d/k8s.conf:

cat <<EOF > /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
vm.swappiness=0 
EOF

sysctl --system

6. Configure the required YUM repositories

yum -y install epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 wget
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos
systemctl enable docker && systemctl restart docker

OK, the environment prep is done. Next let's create the TLS certificates and keys the cluster deployment needs.

The components of a Kubernetes system encrypt their communication with TLS certificates. This article uses CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) and the other certificates.

Note: all of the following runs on the master node, i.e. 10.0.100.202. The certificates only need to be created once; when adding new nodes to the cluster later, just copy the certificates under /etc/kubernetes/ to the new node.

Master Node

1. Install CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
 
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
 
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
 
export PATH=/usr/local/bin:$PATH

2. Configure the CA

mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# Create the ca-config.json file below, following the format of config.json
# The expiry is set to 87600h (10 years)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

3. Create the CA certificate signing request, i.e. the ca-csr.json file, with the following content:

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
    "ca": {
       "expiry": "87600h"
    }
}

4. Generate the CA certificate and private key

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
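If you want to double-check what was just issued, cfssl-certinfo (installed in step 1) can dump a certificate's fields, for example:

# inspect the CA certificate's subject, validity and usages
cfssl-certinfo -cert ca.pem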

5. Create the Kubernetes certificate signing request file kubernetes-csr.json; remember to substitute your own IPs:

{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.0.100.202",
      "10.0.100.203",
      "10.0.100.204",
      "10.254.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

6. Generate the kubernetes certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
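It's worth verifying that every IP and service name from the hosts list actually made it into the certificate's SANs; one way to check with openssl:

# list the Subject Alternative Names baked into the kubernetes certificate
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'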

7. Create the admin certificate signing request file admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

8. Generate the admin certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

9. Create the kube-proxy certificate signing request file kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

10. Generate the kube-proxy client certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

11. Distribute the certificates: copy the generated certificate and key files (the .pem files) to /etc/kubernetes/ssl on all machines for later use (create the /etc/kubernetes/ssl directory on the other nodes first).

mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
cd /etc/kubernetes
scp ./ssl/* 10.0.100.203:/etc/kubernetes/ssl/
scp ./ssl/* 10.0.100.204:/etc/kubernetes/ssl/

12. Install the kubectl command-line tool

wget https://dl.k8s.io/v1.11.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /usr/bin/
chmod a+x /usr/bin/kube*

13. Create the kubectl kubeconfig file

export KUBE_APISERVER="https://10.0.100.202:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# Set the default context
kubectl config use-context kubernetes
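The commands above write to the default ~/.kube/config; a quick way to confirm the cluster, user and context were all recorded:

kubectl config view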

14. Create the TLS Bootstrapping Token

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cp token.csv /etc/kubernetes/

15. Create the kubelet bootstrapping kubeconfig file

cd /etc/kubernetes
export KUBE_APISERVER="https://10.0.100.202:6443"
 
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

16. Create the kube-proxy kubeconfig file

export KUBE_APISERVER="https://10.0.100.202:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

17. Distribute the kubeconfig files: copy the two kubeconfig files to /etc/kubernetes/ on all Node machines

cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
scp ./bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.100.203:/etc/kubernetes/
scp ./bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.100.204:/etc/kubernetes/

OK, that wraps up the certificates and keys. I suspect many of you are a bit lost, since we just created quite a few of them, so here is a summary:

The generated certificate and key files are as follows:

ca-key.pem
ca.pem
kubernetes-key.pem
kubernetes.pem
kube-proxy.pem
kube-proxy-key.pem
admin.pem
admin-key.pem

The components that use these certificates:

etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
kubelet: uses ca.pem;
kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;
kubectl: uses ca.pem, admin-key.pem, admin.pem;
kube-controller-manager: uses ca-key.pem, ca.pem

With the summary above it should all be clear at a glance. OK, next let's install the etcd cluster.


Deploy etcd on All Nodes

Kubernetes uses etcd to store all of its data. Below we build a three-node etcd cluster across master, node1 and node2. We already created plenty of TLS certificates earlier, so here we simply reuse the kubernetes certificate. Run the following on all nodes.

 

1. Download the etcd release binaries

wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
mv etcd-v3.1.5-linux-amd64/etcd* /usr/local/bin

2. Create the systemd unit file for etcd: create etcd.service under /usr/lib/systemd/system/ with the content below. Remember to replace the IP addresses with those of your own etcd cluster hosts.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
 
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster k8s-master=https://10.0.100.202:2380,k8s-node1=https://10.0.100.203:2380,k8s-node2=https://10.0.100.204:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Note: etcd's data directory is /var/lib/etcd; create it (mkdir -p /var/lib/etcd) before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory".


3. The environment variable file /etc/etcd/etcd.conf

#[member]
ETCD_NAME=k8s-master
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.100.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.100.202:2379"
 
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.100.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS=https://10.0.100.202:2379

Note: this is the configuration for the 10.0.100.202 node. For the other two etcd nodes, just change the IP addresses above to the node's own IP and set ETCD_NAME to k8s-node1 or k8s-node2 accordingly.


4. Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

# Repeat the steps above on all kubernetes nodes until the etcd service is running on every machine.

5. Verify the service

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
2018-08-14 02:16:44.081321 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-08-14 02:16:44.084285 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 109271147228d387 is healthy: got healthy result from https://10.0.100.203:2379
member 298a4447067ff8b8 is healthy: got healthy result from https://10.0.100.204:2379
member 5bc4c443d246701d is healthy: got healthy result from https://10.0.100.202:2379
cluster is healthy

When the last line of the output reads cluster is healthy, the cluster is working properly.
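Besides cluster-health, listing the members is a quick way to confirm that each node registered with the peer and client URLs you configured; the same TLS flags apply:

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  member list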


Master Node

Picking up where we left off on the master (we took a detour to deploy etcd), let's now deploy the services the master needs: kube-apiserver, kube-scheduler and kube-controller-manager.

 

1. Download the Kubernetes v1.11 server binary package

wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

2. Create the service file for kube-apiserver, /usr/lib/systemd/system/kube-apiserver.service:

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
 
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target


The content of /etc/kubernetes/config is:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_MASTER="--master=http://10.0.100.202:8080"

# This file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy.

3. The apiserver configuration file /etc/kubernetes/apiserver:

KUBE_API_ADDRESS="--advertise-address=10.0.100.202 --bind-address=10.0.100.202 --insecure-bind-address=10.0.100.202"
KUBE_ETCD_SERVERS="--etcd-servers=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth=true --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"

4. Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
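Since we bound an insecure address above, the simplest liveness check goes through the insecure port (8080 is the default insecure port, assuming you haven't changed it):

# the insecure port answers without client certificates
curl http://10.0.100.202:8080/healthz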

5. Create the service file for kube-controller-manager, /usr/lib/systemd/system/kube-controller-manager.service:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
 
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

6. The configuration file /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

7. Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

8. Create the service file for kube-scheduler, /usr/lib/systemd/system/kube-scheduler.service:

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
 
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

9. The configuration file /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

10. Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

11. Verify master node health

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}


Deploy Flannel on All Nodes

Next we install the Flannel network plugin. Every node needs the network plugin so that all Pods join the same overlay network, so run the steps below on all nodes. I recommend installing flanneld straight from yum unless you need a specific version; the default package is flannel 0.7.1.


1. Install flannel

yum install -y flannel

2. Edit the service file /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
 
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
WantedBy=docker.service

3. Edit the configuration file /etc/sysconfig/flanneld:

# Flanneld configuration options  
 
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379"
 
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
 
# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

On hosts with multiple NICs (e.g. a vagrant environment), add the interface that faces the external network to FLANNEL_OPTIONS, e.g. -iface=eth2.


4. Create the network configuration in etcd (run this once, on the master only)

etcdctl --endpoints=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network
 
etcdctl --endpoints=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379  \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"10.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'

If you want to use vxlan mode instead, just change host-gw to vxlan, as sketched below.
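For example, the vxlan variant of the same network configuration would be written like this (note that overwriting an existing key takes set rather than mk):

etcdctl --endpoints=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  set /kube-centos/network/config '{"Network":"10.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'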


5. Start flannel

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
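Once flanneld is up, each node should hold a /24 lease out of the 10.30.0.0/16 range; you can check the local lease and the leases recorded in etcd:

# the subnet leased to this node
cat /run/flannel/subnet.env
# all subnet leases registered in etcd (same TLS flags as before)
etcdctl --endpoints=https://10.0.100.202:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets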


Deploy the Node Services

OK, so far the master services, the etcd cluster and the flannel network are all in place. Now let's build out the node services. First confirm that flannel, docker and etcd are running on each node, then check that the certificates and config files are present under /etc/kubernetes/; I won't repeat those steps here.

 

1. Configure docker to use the flannel network

After flanneld is started via systemctl, it automatically runs ./mk-docker-opts.sh -i, which generates the following two environment-variable files under /run/flannel/:

ls /run/flannel/
docker  subnet.env

Docker reads these environment files for its container startup parameters. Edit docker's configuration file /usr/lib/systemd/system/docker.service and add one environment-file line:

EnvironmentFile=-/run/flannel/docker

To avoid hitting the error "error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"" when we restart kubelet later, let's add one more option now: --exec-opt native.cgroupdriver=systemd in ExecStart. Why does this error happen? Because kubelet and docker disagree on the cgroup driver: kubelet's --cgroup-driver flag can be set to either "cgroupfs" or "systemd", and it must match docker's.
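Putting both changes together, the relevant part of docker.service's [Service] section ends up looking roughly like this (a sketch based on the stock docker-ce 17.03 unit; other lines omitted):

[Service]
Type=notify
# options generated by flannel's mk-docker-opts.sh (defines DOCKER_NETWORK_OPTIONS)
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS --exec-opt native.cgroupdriver=systemd

After editing, reload and restart docker:

systemctl daemon-reload
systemctl restart docker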


2. Before installing kubelet, first create the RBAC role bindings it needs, on the master:

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-nodes --clusterrole=system:node --group=system:nodes

Note: both bindings are required; without them you will hit errors such as cannot list pods at the cluster scope.
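You can confirm both bindings exist from the master:

kubectl get clusterrolebinding kubelet-bootstrap kubelet-nodes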


3. Now let's install and configure kubelet

wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz
cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/

4. Create kubelet's service file /usr/lib/systemd/system/kubelet.service:

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
 
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

5. Create kubelet's configuration file /etc/kubernetes/kubelet. Change the IP addresses to each node's own IP.

KUBELET_ADDRESS="--address=10.0.100.203"
KUBELET_HOSTNAME="--hostname-override=10.0.100.203"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"

6. Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

7. On the master, approve kubelet's TLS certificate request

[root@k8s-master ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-qgKV6Z_YCV5Zwt0erq2sdtEK8V1z_7Opa5C2JtSW54I   3s        kubelet-bootstrap   Pending
[root@k8s-master ~]# kubectl certificate approve node-csr-qgKV6Z_YCV5Zwt0erq2sdtEK8V1z_7Opa5C2JtSW54I
[root@k8s-master ~]# kubectl get no
NAME           STATUS    ROLES     AGE       VERSION
10.0.100.203   Ready     <none>    10s       v1.11.0

8. Install conntrack

yum install -y conntrack-tools

9. Create kube-proxy's service file /usr/lib/systemd/system/kube-proxy.service:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
 
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

10. The kube-proxy configuration file /etc/kubernetes/proxy

KUBE_PROXY_ARGS="--bind-address=10.0.100.203 --hostname-override=10.0.100.203 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

11. Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
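kube-proxy's default iptables mode programs NAT rules for every service, so once it's running you should see the KUBE-SERVICES chain populated:

iptables -t nat -L KUBE-SERVICES -n | head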

At this point our K8S cluster has been built entirely by hand. Finally, let's spin up a demo to test it:

$ kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx  --port=80
deployment "nginx" created
$ kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposed
$ kubectl describe svc example-service
Name:                     example-service
Namespace:                default
Labels:                   run=load-balancer-example
Annotations:              <none>
Selector:                 run=load-balancer-example
Type:                     NodePort
IP:                       10.254.102.2
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30460/TCP
Endpoints:                172.17.0.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

As you can see, we're using plain old NodePort access here, on port 30460; hitting that port on any cluster node's IP brings up the page.
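From any machine that can reach the nodes, a curl against the NodePort confirms the path end to end (30460 is the port allocated above; yours will differ):

curl -I http://10.0.100.203:30460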

(Screenshot: the nginx welcome page served via the NodePort.)

OK, that's a wrap for the manual deployment. I'll write up more hands-on material when time permits, so stay tuned.

This article draws on jimmysong's blog:

https://jimmysong.io/kubernetes-handbook/practice/


