(Repost) Deploying a Kubernetes Cluster Manually with TLS Certificates (Part 2)

Repost from: https://www.cnblogs.com/wdliu/p/9152347.html

1. Master Component Deployment

Following on from the previous post -- Deploying a Kubernetes Cluster Manually with TLS Certificates (Part 1) -- the etcd cluster, the flannel network, and Docker on every node are already in place. Next we deploy the master node.

1. Download the packages:

Download address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md

2. Unpack the archive and create the directories

# Unpack the downloaded archive
tar zxvf kubernetes-server-linux-amd64.tar.gz

# Create the directories; the ssl directory already exists from the previous post, so it can be skipped
mkdir -p /opt/kubernetes/{bin,conf,ssl}

# Copy the binaries (they sit under kubernetes/server/bin/ inside the unpacked tree)
cp kube-controller-manager /opt/kubernetes/bin/
cp kube-apiserver /opt/kubernetes/bin/
cp kube-scheduler /opt/kubernetes/bin/
cp kubectl /opt/kubernetes/bin/

# Make them executable
chmod a+x /opt/kubernetes/bin/*
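The new binaries live outside the default search path. An optional convenience step (not in the original post) is to append the bin directory to PATH:

```shell
# Append the Kubernetes bin directory to PATH for this shell;
# add the same line to /etc/profile to make it permanent.
export PATH=$PATH:/opt/kubernetes/bin

# Confirm the directory is now on PATH
echo "$PATH" | grep -q '/opt/kubernetes/bin' && echo "PATH updated"
```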

3. Create a TLS Bootstrapping token for component communication

# Enter the configuration directory
cd /opt/kubernetes/conf/
# Generate a token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# Save it to a file
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# Inspect the token
cat token.csv
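The od pipeline above turns 16 random bytes into a 32-character lowercase hex string. A quick sanity check before wiring the token into the apiserver:

```shell
# Generate a token the same way as above and check its shape
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# 16 random bytes -> 32 hex characters
echo "token length: ${#BOOTSTRAP_TOKEN}"   # → token length: 32

# The token must consist only of hex digits
echo "$BOOTSTRAP_TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token looks valid"
```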

4. Configure each master component

kube-apiserver

# Configuration file
cat > /opt/kubernetes/conf/kube-apiserver <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.1.210.32:2379,https://10.1.210.33:2379,https://10.1.210.34:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=10.1.210.33 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=10.1.210.33 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/conf/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

## systemd unit file
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

kube-scheduler

# Configuration file
cat > /opt/kubernetes/conf/kube-scheduler <<EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF

# systemd unit file
cat  > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

kube-controller-manager

# Configuration file
cat > /opt/kubernetes/conf/kube-controller-manager <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

# systemd unit file
cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5. Start all master components

# Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

# Start kube-scheduler
systemctl enable kube-scheduler
systemctl restart kube-scheduler

# Start kube-controller-manager
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

6. Check the status of each component with kubectl get cs, as shown in the figure below:
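For reference, a healthy cluster at this point produces output roughly like the following (illustrative; the exact etcd entries depend on your cluster):

```
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
```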

2. Node Component Deployment

1. Create the node kubeconfig files (run this on the master, then distribute them to every node). This step depends on the token generated earlier; make sure echo $BOOTSTRAP_TOKEN still prints a value.

# Enter the certificate directory
cd /opt/kubernetes/ssl/

# Point at the api-server address
export KUBE_APISERVER="https://10.1.210.33:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
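The next step distributes a kube-proxy.kubeconfig as well, but its creation is not shown above. A minimal sketch, assuming the same bootstrap token is reused for kube-proxy (written as a plain heredoc so the resulting file format is visible; the kubectl config set-cluster/set-credentials/set-context commands above, run with a kube-proxy user and --kubeconfig=kube-proxy.kubeconfig, produce the equivalent):

```shell
# Sketch: generate kube-proxy.kubeconfig in the same directory.
# Assumes KUBE_APISERVER and BOOTSTRAP_TOKEN are still set from the steps above.
KUBE_APISERVER=${KUBE_APISERVER:-"https://10.1.210.33:6443"}

cat > kube-proxy.kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: ${KUBE_APISERVER}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
users:
- name: kube-proxy
  user:
    token: ${BOOTSTRAP_TOKEN}
EOF

# Quick check that the file was written
grep -q 'kind: Config' kube-proxy.kubeconfig && echo "kube-proxy.kubeconfig written"
```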

2. Distribute the kubeconfig files (bootstrap.kubeconfig, kube-proxy.kubeconfig)

# Distribute the node configuration files
scp *.kubeconfig node1:/opt/kubernetes/conf/
scp *.kubeconfig node2:/opt/kubernetes/conf/

3. Deploy the components on one of the nodes (the server tarball downloaded earlier already contains the node binaries)

For convenience, the scripts below generate the configuration files and systemd units:

The kubelet component

Argument 1: the address the kubelet listens on

Argument 2: the cluster DNS address (the DNS add-on deployed later)

sh kubelet.sh 10.1.210.32 10.10.10.3
#!/bin/bash

NODE_ADDRESS=${1:-"10.1.210.32"}
DNS_SERVER_IP=${2:-"10.10.10.3"}

cat <<EOF >/opt/kubernetes/conf/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

The kube-proxy component

Argument 1: the address kube-proxy listens on

sh proxy.sh 10.1.210.32
#!/bin/bash

NODE_ADDRESS=${1:-"10.1.210.32"}

cat <<EOF >/opt/kubernetes/conf/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
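Both scripts rely on bash's ${1:-default} expansion, so their arguments are optional and fall back to the addresses shown. The idiom in isolation:

```shell
# ${1:-default}: use the first positional argument if given, else the default
set -- 10.1.210.99                     # simulate calling the script with one argument
NODE_ADDRESS=${1:-"10.1.210.32"}
echo "$NODE_ADDRESS"                   # → 10.1.210.99

set --                                 # simulate calling with no arguments
NODE_ADDRESS=${1:-"10.1.210.32"}
echo "$NODE_ADDRESS"                   # → 10.1.210.32
```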

4. Because the cluster uses RBAC authorization, the kubelet must be granted permissions (run this on the master)

# Create the role binding; kubectl create clusterrolebinding --help shows the available options

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

# Restart kubelet and kube-proxy
systemctl restart kubelet

systemctl restart kube-proxy

5. Back on the master, check the certificate signing requests (kubectl get csr) to see whether the node has requested a cluster certificate, as shown below:

6. Approve the node's certificate signing request

## See kubectl certificate --help for usage
kubectl certificate approve node-csr-urT-yh6bTjMi_-XXaRSdzPTWRuAULBjuaP85RU7_v8U

7. Check whether the node has joined; a node in the Ready state has joined the cluster.

8. Repeat the same steps on the other node; alternatively, copy the configuration files over, adjust the addresses, and join that node to the cluster the same way, as shown in the figure:

9. Test that the cluster works

# Create nginx pods
kubectl run nginx --image=nginx --replicas=2
# Check the pods
kubectl get pod
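In this Kubernetes release (1.9), kubectl run with --replicas creates a Deployment behind the scenes. The equivalent manifest, in the same style as the dashboard YAML later in this post, would look roughly like this (a sketch, not from the original post):

```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```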

 

3. Deploying the Dashboard

The dashboard is a web UI that ships with Kubernetes; it shows basic information and is a great help in understanding cluster state.

1. For tidiness, all YAML files are kept under /opt/kubernetes/yaml. Before creating the dashboard, the required service account and role binding must be created.

kubectl create -f dashboard-rbac.yaml

dashboard-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

2. Create the deployment for the dashboard. Note that the image is switched to an Alibaba Cloud mirror; otherwise the pull would go to Google's registry and fail.

kubectl create -f dashboard-deployment.yaml

dashboard-deployment.yaml

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.7.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

3. Create a service to expose the dashboard

kubectl create -f dashboard-service.yaml

dashboard-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

4. Check the status

# Check the service
kubectl get svc -n kube-system

# Check the pods
kubectl get pods -n kube-system

# Check everything in the namespace
kubectl get all -n kube-system

5. From the service information above (80:38158), visit http://10.1.210.34:38158/ using any node IP to open the dashboard. That completes the cluster deployment.
