k8s 1.13.0 binary deployment - ETCD cluster (Part 1)

There are two main types of nodes in a Kubernetes cluster: master nodes and minion (worker) nodes.

Minion nodes run Docker containers; they interact with the Docker daemon on the node and also provide proxy functionality.
Master nodes expose a set of API endpoints for managing the cluster and operate it by interacting with the minion nodes.

Essential Kubernetes components

kube-apiserver: the unified entry point to the cluster and the coordinator of all components. It exposes a RESTful API; every create/read/update/delete and watch operation on resource objects goes through the APIServer, which then persists the data to etcd.
kube-controller-manager: handles routine background tasks in the cluster. Each resource has a corresponding controller, and the ControllerManager is responsible for managing these controllers.
kube-scheduler: selects a Node for each newly created Pod according to the scheduling algorithm. It can be deployed anywhere: on the same node as other components or on a separate one.
etcd: a distributed key-value store used to persist cluster state, such as Pod and Service objects.
kubelet: the Master's agent on each Node. It manages the lifecycle of containers on its host, for example creating containers, mounting volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
kube-proxy: implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.
docker or rocket: the container engine that actually runs the containers.
Pod network: Pods must be able to communicate with each other, so a Pod network has to be deployed in the cluster. flannel is one of the options: an overlay-network tool designed for Kubernetes by the CoreOS team.

Kubernetes cluster architecture and components:

 

Prepare the deployment environment:

vip            192.168.0.130    keepalived
k8s-master1    192.168.0.123    kube-apiserver,kube-controller-manager,kube-scheduler,etcd
k8s-master2    192.168.0.124    kube-apiserver,kube-controller-manager,kube-scheduler
k8s-node01     192.168.0.125    kubelet,kube-proxy,docker,flannel,etcd
k8s-node02     192.168.0.126    kubelet,kube-proxy,docker,flannel,etcd

System environment initialization:

#System update
yum install -y epel-release; yum update -y

#Set the hostname on each node
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

#master1 IP
[root@k8s-master1 ~]# ifconfig ens32
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.123  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe8a:2b5f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:8a:2b:5f  txqueuelen 1000  (Ethernet)
        RX packets 821  bytes 84238 (82.2 KiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 143  bytes 18221 (17.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

#master2 IP
[root@k8s-master2 ~]# ifconfig ens32
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.124  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe77:dc9c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:77:dc:9c  txqueuelen 1000  (Ethernet)
        RX packets 815  bytes 81627 (79.7 KiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 158  bytes 20558 (20.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

#node01 IP
[root@k8s-node01 ~]# ifconfig ens32
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.125  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe80:7949  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:80:79:49  txqueuelen 1000  (Ethernet)
        RX packets 830  bytes 85270 (83.2 KiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 152  bytes 19376 (18.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

#node02 IP
[root@k8s-node02 ~]# ifconfig ens32
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.126  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe7a:e67b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:7a:e6:7b  txqueuelen 1000  (Ethernet)
        RX packets 867  bytes 87775 (85.7 KiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 157  bytes 19866 (19.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

#Synchronize time
yum -y install ntp
systemctl start ntpd
ntpdate cn.pool.ntp.org

#Disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
getenforce

#Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
cat /etc/fstab

#Set kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

#Prepare the deployment directories
mkdir -p /opt/kubernetes/{bin,cfg,ssl,log}
echo 'export PATH=/opt/kubernetes/bin:$PATH' > /etc/profile.d/k8s.sh
source /etc/profile.d/k8s.sh
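The steps below distribute certificates and binaries with scp, which assumes passwordless SSH from k8s-master1 to the other nodes. A minimal sketch to set that up, run on master1 (assuming root login is allowed on the other hosts; IPs as in the table above):

#Generate a key pair and push the public key to the other nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in 192.168.0.124 192.168.0.125 192.168.0.126; do
  ssh-copy-id root@$host
done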

CA certificate creation and distribution:

Since Kubernetes 1.8, the cluster components use TLS certificates to encrypt their communication, and every Kubernetes cluster needs its own independent CA setup. There are three common toolchains for creating the CA certificates: easyrsa, openssl, and cfssl. Here we use cfssl, currently the most widely used option; it is comparatively simple to configure, since everything certificate-related is described in JSON files. The cfssl version used here is R1.2.

1. Install CFSSL
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl_linux-amd64 /usr/local/bin/cfssl
2. Initialize cfssl
mkdir ssl && cd ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
3. Create the JSON config file used to generate the CA file
[root@k8s-master1 ssl]# vim ca-config.json
{
  "signing": { "default": { "expiry": "8760h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "8760h" } } } }

signing: indicates the certificate can be used to sign other certificates; the generated ca.pem will have CA=TRUE;
server auth: indicates a client can use this CA to verify the certificate presented by a server;
client auth: indicates a server can use this CA to verify the certificate presented by a client;

4. Create the JSON config file used to generate the CA certificate signing request (CSR)
[root@k8s-master1 ssl]# vim ca-csr.json
{
  "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" } ] }

"CN": Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name); browsers use this field to verify whether a website is legitimate;
"O": Organization. kube-apiserver extracts this field from the certificate and uses it as the group (Group) the requesting user belongs to;

5. Generate the CA certificate (ca.pem) and key (ca-key.pem)
[root@k8s-master1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/12/14 13:36:34 [INFO] generating a new CA key and certificate from CSR
2018/12/14 13:36:34 [INFO] generate received request
2018/12/14 13:36:34 [INFO] received CSR
2018/12/14 13:36:34 [INFO] generating key: rsa-2048
2018/12/14 13:36:34 [INFO] encoded CSR
2018/12/14 13:36:34 [INFO] signed certificate with serial number 685737089592185658867716737752849077098687904892
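To sanity-check the result, cfssl-certinfo can decode the generated certificate and show its subject, SANs, and validity period; a quick look, run from the same ssl directory:

cfssl-certinfo -cert ca.pem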

Distribute the certificate and key to the other nodes

cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.0.125:/opt/kubernetes/ssl
scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.0.126:/opt/kubernetes/ssl

ETCD deployment

All persisted state is stored in etcd as key-value pairs. Similar to ZooKeeper, it provides distributed coordination services. The reason the Kubernetes components are said to be stateless is that they keep all their data in etcd. Since etcd supports clustering, we deploy etcd on three of the hosts.

Create the etcd certificate signing request
[root@k8s-master1 ssl]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.0.123",
    "192.168.0.125",
    "192.168.0.126"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "k8s",
     "OU": "System"
    }
  ]
}
Generate the etcd certificate and private key
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Copy the certificates to the other nodes
cp etcd*.pem /opt/kubernetes/ssl/
scp etcd*.pem 192.168.0.125:/opt/kubernetes/ssl/
scp etcd*.pem 192.168.0.126:/opt/kubernetes/ssl/
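Before moving on, it is worth confirming that the IPs from the hosts list above actually made it into the certificate as Subject Alternative Names; a quick check with openssl (available by default on CentOS):

openssl x509 -in /opt/kubernetes/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"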
Prepare the binaries

Binary packages can be downloaded from: https://github.com/coreos/etcd/releases

cd /usr/local/src/
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar xf etcd-v3.3.10-linux-amd64.tar.gz 
cd etcd-v3.3.10-linux-amd64
cp etcd etcdctl /opt/kubernetes/bin/
scp etcd etcdctl 192.168.0.125:/opt/kubernetes/bin/
scp etcd etcdctl 192.168.0.126:/opt/kubernetes/bin/
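A quick version check on each node confirms the binaries are in place and that the PATH from /etc/profile.d/k8s.sh is active; both commands should report 3.3.10:

etcd --version
etcdctl --version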
Set up the etcd configuration file

[root@k8s-master1 ~]# vim /opt/kubernetes/cfg/etcd.conf

#[member]
ETCD_NAME="etcd-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.123:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.123:2379,https://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.123:2380"
ETCD_INITIAL_CLUSTER="etcd-node1=https://192.168.0.123:2380,etcd-node2=https://192.168.0.125:2380,etcd-node3=https://192.168.0.126:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.123:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

ETCD_NAME                          node name
ETCD_DATA_DIR                      data directory
ETCD_LISTEN_PEER_URLS              listen address for cluster (peer) communication
ETCD_LISTEN_CLIENT_URLS            listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS   advertised peer address for the cluster
ETCD_ADVERTISE_CLIENT_URLS         advertised client address
ETCD_INITIAL_CLUSTER               addresses of the cluster members
ETCD_INITIAL_CLUSTER_TOKEN         cluster token
ETCD_INITIAL_CLUSTER_STATE         state when joining the cluster: "new" for a new cluster, "existing" to join an existing one

Create the etcd systemd service

[root@k8s-master1 ~]# vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target

[Service]
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
Type=notify

[Install]
WantedBy=multi-user.target

Copy the configuration file and the service file to the other two nodes (remember to change the corresponding IPs and ETCD_NAME; see the sketch after these commands)
scp /opt/kubernetes/cfg/etcd.conf 192.168.0.125:/opt/kubernetes/cfg
scp /opt/kubernetes/cfg/etcd.conf 192.168.0.126:/opt/kubernetes/cfg
scp /usr/lib/systemd/system/etcd.service 192.168.0.125:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service 192.168.0.126:/usr/lib/systemd/system/
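For reference, on k8s-node01 (etcd-node2, 192.168.0.125) the entries that differ from the master1 config would look like this, following the member addresses in ETCD_INITIAL_CLUSTER; everything else stays the same:

ETCD_NAME="etcd-node2"
ETCD_LISTEN_PEER_URLS="https://192.168.0.125:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.125:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.125:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.125:2379"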
Create the etcd working directory
mkdir /var/lib/etcd
Start the etcd service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
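Note that on a brand-new cluster the first start will appear to hang until a quorum of members is up, so etcd should be started on all three nodes at roughly the same time. A minimal sketch from master1, assuming the passwordless SSH set up earlier:

#Start etcd on the two remote nodes in parallel while it starts locally
for host in 192.168.0.125 192.168.0.126; do
  ssh $host "mkdir -p /var/lib/etcd && systemctl daemon-reload && systemctl enable etcd && systemctl start etcd" &
done
wait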
Verify the cluster
etcdctl --endpoints=https://192.168.0.123:2379,https://192.168.0.125:2379,https://192.168.0.126:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

Output like the following indicates the etcd cluster is configured correctly:

member 3126a455a15179c6 is healthy: got healthy result from https://192.168.0.125:2379
member 40601cc6f27d1bf1 is healthy: got healthy result from https://192.168.0.123:2379
member 431013e88beab64c is healthy: got healthy result from https://192.168.0.126:2379
cluster is healthy
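The same TLS flags also work for listing the members and their peer/client URLs (etcdctl in etcd 3.3 speaks the v2 API by default, which is what the flags above use):

etcdctl --endpoints=https://192.168.0.123:2379,https://192.168.0.125:2379,https://192.168.0.126:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem member list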