Kubernetes Cluster Installation Guide: Deploying the kube-scheduler Component Cluster

kube-scheduler is a master-node component. The kube-scheduler cluster consists of 3 nodes; after startup, a leader is chosen through a competitive election, while the other nodes remain blocked (standby). When the leader becomes unavailable, the remaining nodes hold a new election and produce a new leader, which keeps the service highly available.

1 Installation Preparation

Note: all operations here are executed from the devops host via ansible. kube-scheduler uses its certificate in the following two cases (a sketch of generating this certificate follows the list):

  • communicating with kube-apiserver over the secure port;
  • serving Prometheus-format metrics on the secure port (https, 10259).
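For reference, a minimal sketch of generating this certificate with cfssl. This assumes the cluster CA was created with cfssl, that ca-config.json defines a kubernetes profile, and that the CN is system:kube-scheduler so the built-in RBAC binding of the same name applies; the hosts list is a placeholder to replace with your master node IPs.

cd ${CA_DIR}
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": ["127.0.0.1"],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "O": "system:kube-scheduler", "OU": "k8s"}]
}
EOF
# Sign the CSR with the cluster CA; add your master node IPs to "hosts" first
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler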

1.1 Environment Variable Definitions

#################### Variable parameter setting ######################
KUBE_NAME=kube-scheduler
K8S_INSTALL_PATH=/data/apps/k8s/kubernetes
K8S_BIN_PATH=${K8S_INSTALL_PATH}/sbin
K8S_LOG_DIR=${K8S_INSTALL_PATH}/logs
K8S_CONF_PATH=/etc/k8s/kubernetes
KUBE_CONFIG_PATH=/etc/k8s/kubeconfig
CA_DIR=/etc/k8s/ssl
SOFTWARE=/root/software
VERSION=v1.14.2
PACKAGE="kubernetes-server-${VERSION}-linux-amd64.tar.gz"
DOWNLOAD_URL="https://github.com/devops-apps/download/raw/master/kubernetes/${PACKAGE}"
ETH_INTERFACE=eth1
LISTEN_IP=$(ifconfig | grep -A 1 ${ETH_INTERFACE} |grep inet |awk '{print $2}')
USER=k8s

1.2 Download and Distribute the Kubernetes Binaries

Download a stable release package from the official kubernetes GitHub repository to the local host:

wget  $DOWNLOAD_URL -P $SOFTWARE

Distribute the kubernetes package to each master node:

sudo ansible master_k8s_vgs -m copy -a "src=${SOFTWARE}/$PACKAGE dest=${SOFTWARE}/" -b
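To confirm the package reached each master intact, the checksums can be compared (a quick sketch using ansible's shell module):

# Checksum on the devops host
sha256sum ${SOFTWARE}/${PACKAGE}
# Checksum on every master node; the values should all match
sudo ansible master_k8s_vgs -m shell -a "sha256sum ${SOFTWARE}/${PACKAGE}" -b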

2 Deploy the kube-scheduler Cluster

2.1 Install the kube-scheduler Binary

### 1.Check if the install directory exists.
if [ ! -d "$K8S_BIN_PATH" ]; then
     mkdir -p $K8S_BIN_PATH
fi

if [ ! -d "$K8S_LOG_DIR/$KUBE_NAME" ]; then
     mkdir -p $K8S_LOG_DIR/$KUBE_NAME
fi

if [ ! -d "$K8S_CONF_PATH" ]; then
     mkdir -p $K8S_CONF_PATH
fi

if [ ! -d "$KUBE_CONFIG_PATH" ]; then
     mkdir -p $KUBE_CONFIG_PATH
fi

### 2.Install kube-scheduler binary of kubernetes.
if [ ! -f "$SOFTWARE/kubernetes-server-${VERSION}-linux-amd64.tar.gz" ]; then
     wget $DOWNLOAD_URL -P $SOFTWARE >>/tmp/install.log  2>&1
fi
cd $SOFTWARE && tar -xzf kubernetes-server-${VERSION}-linux-amd64.tar.gz -C ./
cp -fp kubernetes/server/bin/$KUBE_NAME $K8S_BIN_PATH
ln -sf  $K8S_BIN_PATH/$KUBE_NAME /usr/local/bin
chown -R $USER:$USER $K8S_INSTALL_PATH
chmod -R 755 $K8S_INSTALL_PATH
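After installation, the binary can be checked quickly:

# Confirm the symlink works and the expected version is installed
kube-scheduler --version
# Expected output for this guide: Kubernetes v1.14.2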

2.2 Distribute the kubeconfig and Certificate Files

Distribute the certificates:
cd ${CA_DIR}
sudo ansible master_k8s_vgs -m  copy -a "src=kube-scheduler.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m  copy -a "src=kube-scheduler-key.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m  copy -a "src=ca.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m  copy -a "src=ca-key.pem dest=${CA_DIR}/" -b
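Optionally verify the distributed certificate's subject and validity period with openssl; the CN should be system:kube-scheduler, the user that the built-in RBAC binding of the same name grants scheduler permissions to:

openssl x509 -in ${CA_DIR}/kube-scheduler.pem -noout -subject -dates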
Distribute the kubeconfig authentication file

kube-scheduler uses a kubeconfig file to connect to the apiserver; the file provides the apiserver address, the embedded CA certificate, and the kube-scheduler client certificate:

cd $KUBE_CONFIG_PATH
sudo ansible master_k8s_vgs -m  copy -a \
     "src=kube-scheduler.kubeconfig dest=$KUBE_CONFIG_PATH/" -b

Note: if the kubeconfig and certificate files for each component were already synced in an earlier section, this step can be skipped.
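If kube-scheduler.kubeconfig has not been generated yet, a minimal sketch with kubectl config follows; KUBE_APISERVER is an assumed variable holding the apiserver (or its load balancer) address:

cd ${KUBE_CONFIG_PATH}
# Cluster entry with the embedded CA certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=${CA_DIR}/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig
# Credentials from the kube-scheduler client certificate
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=${CA_DIR}/kube-scheduler.pem \
  --client-key=${CA_DIR}/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
# Context binding the two together, then set it as current
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig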

2.3 Create the kube-scheduler Configuration File

cat >${K8S_CONF_PATH}/kube-scheduler.yaml<<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: 127.0.0.1:10251
EOF
  • --kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: cluster mode with leader election enabled; the node elected as leader does the work while the other nodes stay blocked (standby);
  • newer kubernetes releases increasingly move these parameters into the configuration file.

2.4 Create the kube-scheduler systemd Service

cat >/usr/lib/systemd/system/${KUBE_NAME}.service<<EOF
[Unit]
Description=Kubernetes kube-scheduler Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
User=${USER}
WorkingDirectory=${K8S_INSTALL_PATH}
ExecStart=${K8S_BIN_PATH}/${KUBE_NAME} \\
  --config=/etc/k8s/kubernetes/kube-scheduler.yaml \\
  --bind-address=${LISTEN_IP} \\
  --secure-port=10259 \\
  --tls-cert-file=${CA_DIR}/kube-scheduler.pem \\
  --tls-private-key-file=${CA_DIR}/kube-scheduler-key.pem \\
  --kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --authentication-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --authorization-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --client-ca-file=${CA_DIR}/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=${CA_DIR}/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --leader-elect=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=${K8S_LOG_DIR}/${KUBE_NAME} \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
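With the configuration and unit files in place on every master, reload systemd and start the service; a sketch using ansible's systemd module:

# Reload systemd, then enable and start kube-scheduler on all master nodes
sudo ansible master_k8s_vgs -m systemd \
  -a "daemon_reload=yes name=${KUBE_NAME} state=started enabled=yes" -b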

2.5 Check the Service Status

systemctl status kube-scheduler|grep Active

Make sure the status is active (running); otherwise inspect the logs to find out why:

sudo journalctl -u kube-scheduler

2.6 View the Exported Metrics

Note: run the following commands on a kube-scheduler node. kube-scheduler listens on ports 10251 and 10259, and both serve /metrics and /healthz (both are checked below):

  • 10251: accepts http requests on the insecure port, with no authentication or authorization; for safety it should listen on 127.0.0.1 only;
  • 10259: accepts https requests on the secure port, with authentication and authorization required; it can listen on any address.
sudo netstat -ntlp | grep kube-sc
tcp   0      0 127.0.0.1:10251         0.0.0.0:*        LISTEN      28786/kube-schedule 
tcp   0      0 10.10.10.22:10259       0.0.0.0:*        LISTEN      28786/kube-schedule
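A quick spot check of both endpoints (a sketch; the secure port authenticates with the kube-scheduler client certificate and may additionally require RBAC access to the /metrics nonResourceURL):

# Insecure local port, no authentication required
curl -s http://127.0.0.1:10251/metrics | head -n 5
# Secure port, client-certificate authentication
curl -s --cacert ${CA_DIR}/ca.pem \
  --cert ${CA_DIR}/kube-scheduler.pem --key ${CA_DIR}/kube-scheduler-key.pem \
  https://${LISTEN_IP}:10259/metrics | head -n 5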

Note: many installation guides disable the insecure port and move the secure port to the default insecure port number. That produces the error shown below when checking cluster status: when kubectl get cs runs, the apiserver probes 127.0.0.1 by default, but with controller-manager and scheduler running in cluster mode they may not sit on the same machine as kube-apiserver and are reached over https, so their status is reported as Unhealthy even though they are working normally. The error below therefore appears while the cluster is actually healthy (a direct healthz check is shown after the outputs below).

kubectl get componentstatuses
NAME                 STATUS      MESSAGE    ERROR
controller-manager  Unhealthy  dial tcp  127.0.0.1:10252: connect: connection refused
scheduler          Unhealthy  dial tcp  127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}

The normal output should be:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}
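Because kubectl get cs probes fixed local ports, a more direct check is to query the healthz endpoint on each master node itself:

# Run on each master node; a healthy kube-scheduler returns "ok"
curl -s http://127.0.0.1:10251/healthz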

2.7 View the Current Leader

kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
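The current leader is recorded in the control-plane.alpha.kubernetes.io/leader annotation of that endpoints object; it can be extracted directly, for example:

kubectl -n kube-system get endpoints kube-scheduler \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
# The holderIdentity field in the output names the node currently holding the lease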

2.8 Test kube-scheduler Cluster High Availability

Pick one or two master nodes at random, stop the kube-scheduler service on them, and check whether another node acquires the leader lock.
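For example (a sketch; master01 is a placeholder for whichever node currently holds the lease):

# Stop the service on the current leader
ssh master01 "sudo systemctl stop kube-scheduler"
# Re-check the leader annotation; holderIdentity should switch to another node
kubectl -n kube-system get endpoints kube-scheduler \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
# Restore the stopped service afterwards
ssh master01 "sudo systemctl start kube-scheduler"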


With kube-scheduler deployed, the master-node part of the kubernetes cluster is complete. The node hosts still need to be deployed next; the kube-scheduler deployment script can be obtained from here.
