Previously we installed a simple Kubernetes cluster with one master node and three worker nodes, and etcd was not set up as a cluster either. This time we will install a Kubernetes cluster with three master nodes plus an etcd cluster.
This installation uses three master nodes and three worker nodes. The etcd cluster is installed on the master nodes, and a virtual IP is prepared for keepalived.
| Node | IP |
|---|---|
| M0 | 10.xx.xx.xx |
| M1 | 10.xx.xx.xx |
| M2 | 10.xx.xx.xx |
| N0 | 10.xx.xx.xx |
| N1 | 10.xx.xx.xx |
| N2 | 10.xx.xx.xx |
virtual_ipaddress: 10.xx.xx.xx
This includes changing the hostnames, disabling the firewall, and so on. The k8s cluster identifies nodes by hostname, so make sure each host gets a unique name. The firewall is disabled to avoid unnecessary network problems.
# Replace the ${hostname} variable with the planned hostname, e.g. M0, N0, N1
sudo hostnamectl set-hostname ${hostname}
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i -re '/^\s*SELINUX=/s/^/#/' -e '$i\\SELINUX=disabled' /etc/selinux/config
Set up mutual SSH trust between the nodes to make it easy to copy files around later. You can set it up quickly with the ssh-copy-id command (see the sketch below) or do it by hand; there are plenty of tutorials online.
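A minimal sketch of setting it up with ssh-copy-id, assuming root logins and RSA keys (replace the 10.xx.xx.xx placeholders with the planned node IPs):

# Generate a key pair if the node does not have one yet
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Push the public key to every other node so ssh/scp work without a password
for host in 10.xx.xx.xx 10.xx.xx.xx; do
    ssh-copy-id root@$host
done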
yum install docker -y
systemctl enable docker && systemctl start docker
Change Docker's log driver to json-file. This does not affect the installation itself; it just makes it easier to set up an EFK log collection stack later. You can check the current log driver with docker info; CentOS 7 defaults to journald. The way to change it varies across Docker versions: the latest official docs say to edit the /etc/docker/daemon.json file, but the version I installed is 1.12.6, so I changed it as follows.
vim /etc/sysconfig/docker
# Change OPTIONS as follows, then restart docker
OPTIONS='--selinux-enabled --log-driver=json-file --signature-verification=false'
systemctl restart docker
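For newer Docker versions that are configured through /etc/docker/daemon.json instead of /etc/sysconfig/docker, a rough equivalent (not needed for 1.12.6) would be:

# Assumption: this Docker version reads /etc/docker/daemon.json
cat <<EOF > /etc/docker/daemon.json
{
  "log-driver": "json-file"
}
EOF
systemctl restart docker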
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
The official docs note that some users installing on RHEL/CentOS 7 have had traffic routed incorrectly because iptables is bypassed, so net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl config.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Enable and start kubelet:
systemctl enable kubelet && systemctl start kubelet
That completes the preparation. At this point kubelet will keep restarting every few seconds until it receives instructions from kubeadm, so it is normal for systemctl status kubelet to show that kubelet is not running. Run it a few times, or follow the logs as shown below, and you will see kubelet constantly stopping and restarting.
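To watch the restart loop directly, the kubelet unit logs can be followed with journalctl:

journalctl -u kubelet -f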
Install cfssl and cfssljson:
curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssl*
SSH to the etcd0 node (in my plan this is the master0 node) and run the commands below. When they finish you will see two files, ca-config.json and ca-csr.json, in the /etc/kubernetes/pki/etcd directory.
mkdir -p /etc/kubernetes/pki/etcd
cd /etc/kubernetes/pki/etcd
cat >ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "43800h"
        },
        "profiles": {
            "server": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "client": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
cat >ca-csr.json <<EOF
{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
EOF
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Run the following on the etcd0 node; it generates the two files client.pem and client-key.pem:
cat >client.json <<EOF
{
    "CN": "client",
    "key": {
        "algo": "ecdsa",
        "size": 256
    }
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
Set the PEER_NAME and PRIVATE_IP environment variables (run this on every etcd machine):
# Note: ens192 below is the name of your actual network interface; it may be something like eth1. Check with ip addr.
export PEER_NAME=$(hostname)
export PRIVATE_IP=$(ip addr show ens192 | grep -Po 'inet \K[\d.]+')
Copy the CA just generated on etcd0 to the other two etcd machines (run this on the two etcd peers). This requires the SSH trust that you set up earlier.
mkdir -p /etc/kubernetes/pki/etcd
cd /etc/kubernetes/pki/etcd
scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca.pem .
scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-key.pem .
scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client.pem .
scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client-key.pem .
scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-config.json .
Run the following on all etcd machines; it generates peer.pem, peer-key.pem, server.pem, and server-key.pem:
cfssl print-defaults csr > config.json
sed -i '0,/CN/{s/example\.net/'"$PEER_NAME"'/}' config.json
sed -i 's/www\.example\.net/'"$PRIVATE_IP"'/' config.json
sed -i 's/example\.net/'"$PEER_NAME"'/' config.json
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer config.json | cfssljson -bare peer
There are two ways to run etcd: directly on the VMs, or as static pods on k8s. I chose the first option, running it directly on the VMs.
Install etcd:
cd /tmp
export ETCD_VERSION=v3.1.10
curl -sSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xzv --strip-components=1 -C /usr/local/bin/
rm -rf etcd-$ETCD_VERSION-linux-amd64*
Generate the environment file for etcd; it will be used later:
touch /etc/etcd.env
echo "PEER_NAME=$PEER_NAME" >> /etc/etcd.env
echo "PRIVATE_IP=$PRIVATE_IP" >> /etc/etcd.env
Create the systemd unit file for the etcd service.
Note: replace the <etcd0-ip-address> and similar placeholders below with the VMs' real IP addresses; m0, m1, etc. are the etcd member names.
cat >/etc/systemd/system/etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd
Conflicts=etcd.service
Conflicts=etcd2.service

[Service]
EnvironmentFile=/etc/etcd.env
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
ExecStart=/usr/local/bin/etcd --name ${PEER_NAME} \
    --data-dir /var/lib/etcd \
    --listen-client-urls https://${PRIVATE_IP}:2379 \
    --advertise-client-urls https://${PRIVATE_IP}:2379 \
    --listen-peer-urls https://${PRIVATE_IP}:2380 \
    --initial-advertise-peer-urls https://${PRIVATE_IP}:2380 \
    --cert-file=/etc/kubernetes/pki/etcd/server.pem \
    --key-file=/etc/kubernetes/pki/etcd/server-key.pem \
    --client-cert-auth \
    --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
    --peer-cert-file=/etc/kubernetes/pki/etcd/peer.pem \
    --peer-key-file=/etc/kubernetes/pki/etcd/peer-key.pem \
    --peer-client-cert-auth \
    --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
    --initial-cluster m0=https://<etcd0-ip-address>:2380,m1=https://<etcd1-ip-address>:2380,m2=https://<etcd2-ip-address>:2380 \
    --initial-cluster-token my-etcd-token \
    --initial-cluster-state new

[Install]
WantedBy=multi-user.target
EOF
Start the etcd cluster:
systemctl daemon-reload
systemctl start etcd
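You may also want to run systemctl enable etcd so the service survives reboots. Once all three members are started, a quick health check can be done with the bundled etcdctl, reusing the client certificate generated earlier (a sketch; the flags assume the v3 API of this etcdctl build):

ETCDCTL_API=3 /usr/local/bin/etcdctl \
    --endpoints=https://${PRIVATE_IP}:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.pem \
    --cert=/etc/kubernetes/pki/etcd/client.pem \
    --key=/etc/kubernetes/pki/etcd/client-key.pem \
    endpoint health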
Install keepalived:
yum install keepalived -y
Modify the configuration file /etc/keepalived/keepalived.conf:
! Configuration File for keepalived

global_defs {
  router_id LVS_DEVEL
}

vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state <STATE>
    interface <INTERFACE>
    virtual_router_id 51
    priority <PRIORITY>
    authentication {
        auth_type PASS
        auth_pass 4be37dc3b4c90194d1600c483e10ad1d
    }
    virtual_ipaddress {
        <VIRTUAL-IP>
    }
    track_script {
        check_apiserver
    }
}
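The placeholders are filled in per node: <STATE> is MASTER on the node that should initially hold the virtual IP and BACKUP on the other two, <INTERFACE> is the interface carrying the private IP (ens192 in my case), <PRIORITY> should be highest on the MASTER (for example 101 versus 100), and <VIRTUAL-IP> is the virtual IP planned at the top. A rough helper for the first master (the values are only examples; adjust them per node):

# Example substitution on the initial MASTER node; BACKUP nodes use state BACKUP and a lower priority
sed -i 's/<STATE>/MASTER/;s/<INTERFACE>/ens192/;s/<PRIORITY>/101/;s/<VIRTUAL-IP>/10.xx.xx.xx/' /etc/keepalived/keepalived.conf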
Health check script: replace <VIRTUAL-IP> below with the virtual IP you prepared, then save the script to the path referenced in the keepalived config, as shown after the script.
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
if ip addr | grep -q <VIRTUAL-IP>; then
    curl --silent --max-time 2 --insecure https://<VIRTUAL-IP>:6443/ -o /dev/null || errorExit "Error GET https://<VIRTUAL-IP>:6443/"
fi
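Save the script to the path that the vrrp_script block points at and make it executable (the path comes from the keepalived config above):

# Assuming the script above was saved as /etc/keepalived/check_apiserver.sh
chmod +x /etc/keepalived/check_apiserver.sh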
Start keepalived:
systemctl start keepalived
Generate the configuration file. First make sure net.bridge.bridge-nf-call-iptables is set to 1:

sysctl net.bridge.bridge-nf-call-iptables=1

Then write out config.yaml, replacing <private-ip>, the etcd endpoints, <podCIDR>, and <load-balancer-ip> with your actual values:
cat >config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: <private-ip>
etcd:
  endpoints:
  - https://<etcd0-ip-address>:2379
  - https://<etcd1-ip-address>:2379
  - https://<etcd2-ip-address>:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: <podCIDR>
apiServerCertSANs:
- <load-balancer-ip>
apiServerExtraArgs:
  apiserver-count: "3"
EOF
Run kubeadm:
kubeadm init --config=config.yaml
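When kubeadm init finishes it prints instructions for pointing kubectl at the new cluster; the usual steps (admin.conf is the default path kubeadm writes) are:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config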
Copy the files that master0 just generated to the master1 and master2 machines (run the following on master1 and master2):
scp root@<master0-ip-address>:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki
scp root@<master0-ip-address>:/etc/kubernetes/pki/ca.key /etc/kubernetes/pki
scp root@<master0-ip-address>:/etc/kubernetes/pki/sa.key /etc/kubernetes/pki
scp root@<master0-ip-address>:/etc/kubernetes/pki/sa.pub /etc/kubernetes/pki
scp root@<master0-ip-address>:/etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki
scp root@<master0-ip-address>:/etc/kubernetes/pki/front-proxy-ca.key /etc/kubernetes/pki
scp -r root@<master0-ip-address>:/etc/kubernetes/pki/etcd /etc/kubernetes/pki
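With the certificates in place, master1 and master2 are brought up the same way as master0; a rough sketch, assuming a config.yaml has been prepared on each of them with advertiseAddress set to that node's own private IP:

# On master1 and master2, after preparing config.yaml as above
kubeadm init --config=config.yaml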
The pod network chosen here has to match the <podCIDR> set above. I chose Flannel (whose manifest defaults to 10.244.0.0/16), so run the command below. The official docs cover this under Installing a pod network.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
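To confirm the Flannel pods (and the rest of the control plane) come up, the kube-system namespace can be watched with standard kubectl:

kubectl get pods -n kube-system -o wide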
On every worker node, run a command of the following form. The master prints this command once kubeadm init completes, so you can just copy and run it. Here all the nodes are joined under master0.
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
When that is done, you can use kubectl get nodes to check whether the cluster installation is complete.