Notes on Manually Deploying Kubernetes 1.9

Preface

Kubernetes has now reached 1.10 as its latest official release. Since others (notably 漠然) have already stepped through the pitfalls, I deployed Kubernetes 1.9 myself to try out the new features in that release. Compared with the Kubernetes 1.7 HA deployment covered earlier, little has fundamentally changed, so this post mainly summarizes the parameter changes and the deployment of the remaining components.

1. Configuration Changes

1.1 API server configuration changes

  • The --runtime-config=rbac.authorization.k8s.io/v1beta1 flag has been removed: RBAC is now stable and part of the v1 API, so it no longer needs to be enabled explicitly;
  • --authorization-mode gains a Node authorizer, because since 1.8 the system:node role is no longer automatically granted to the system:nodes group;
  • The admission controller option has been renamed from --admission-control to --enable-admission-plugins, and NodeRestriction has been added to the plugin list;
  • --audit-policy-file has been added for pointing at an advanced auditing configuration (a sample policy follows the config below);
  • --experimental-bootstrap-token-auth has been removed, replaced by --enable-bootstrap-token-auth;

My apiserver configuration for reference:

[root@master01 ~]# cat /etc/kubernetes/apiserver 
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
 
# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.133.128 --insecure-bind-address=127.0.0.1 --bind-address=192.168.133.128"
 
# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"
 
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
 
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.133.128:2379,https://192.168.133.129:2379,https://192.168.133.130:2379"
 
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \
               --anonymous-auth=false \
               --kubelet-https=true \
               --enable-bootstrap-token-auth \
               --token-auth-file=/etc/kubernetes/ssl/token.csv \
               --service-node-port-range=30000-50000 \
               --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
               --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --audit-policy-file=/etc/kubernetes/ssl/audit-policy.yaml \
               --etcd-quorum-read=true \
               --storage-backend=etcd3 \
               --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
               --etcd-certfile=/etc/etcd/ssl/etcd.pem \
               --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
               --etcd-compaction-interval=5m0s \
               --enable-swagger-ui=true \
               --enable-garbage-collector \
               --enable-logs-handler \
               --kubelet-timeout=3s \
               --apiserver-count=3 \
               --audit-log-maxage=30 \
               --audit-log-maxbackup=3 \
               --audit-log-maxsize=100 \
               --audit-log-path=/var/log/kube-audit/audit.log \
               --event-ttl=1h \
               --log-flush-frequency=5s"

1.2 controller-manager configuration changes

  • Certificate rotation is now enabled by default to automatically sign kubelet certificates, and the certificate lifetime here is set to roughly 10 years; adjust it via --experimental-cluster-signing-duration=86700h0m0s (see the certificate check after the config below);
  • A --controllers option (--controllers=*,bootstrapsigner,tokencleaner) has been added to enable all controllers;

My controller-manager configuration for reference:

# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
                              --service-cluster-ip-range=10.254.0.0/16 \
                              --cluster-name=kubernetes \
                              --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --controllers=*,bootstrapsigner,tokencleaner \
                              --deployment-controller-sync-period=10s \
                              --experimental-cluster-signing-duration=86700h0m0s \
                              --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --leader-elect=true \
                              --node-monitor-grace-period=40s \
                              --node-monitor-period=5s \
                              --pod-eviction-timeout=5m0s \
                              --feature-gates=RotateKubeletServerCertificate=true"
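
Once the controller-manager has signed a kubelet certificate, you can confirm the roughly 10-year lifetime configured above by inspecting the issued certificate on a node. The file name below is an assumption: depending on version, the kubelet writes kubelet-client.crt or kubelet.crt under its --cert-dir (here /etc/kubernetes/ssl):

## Check the validity window of the signed kubelet client certificate
openssl x509 -noout -dates -in /etc/kubernetes/ssl/kubelet-client.crt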

1.3 scheduler configuration changes

  • The default for leader election has been restored (leader-elect=true); see the v1.9.5 changelog (a quick verification follows the config below);

My scheduler configuration for reference:

[root@master01 ~]# cat /etc/kubernetes/scheduler 
###
# kubernetes scheduler config
 
# default config should be adequate
 
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0 \
		     --algorithm-provider=DefaultProvider"
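
Two quick checks confirm that the control plane is healthy and that leader election is working. The annotation key below is how Kubernetes 1.9 records the scheduler's leader lock; the output format may vary across versions:

kubectl get componentstatuses
## Show which scheduler instance currently holds the leader lock
kubectl -n kube-system get endpoints kube-scheduler \
        -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'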

For more details, see the changelog and the official reference: https://v1-9.docs.kubernetes.io/docs/reference/generated/kubelet/

2. Deploying the Network Plugin

2.1 Calico overview

Calico is a pure layer-3 data center networking solution that needs no overlay, and it integrates well with OpenStack, Kubernetes, AWS, and more. On every node, Calico uses the Linux kernel to implement an efficient vRouter that forwards traffic, and each vRouter announces the routes for the workloads it hosts to the rest of the Calico network over BGP. Small deployments can run a full node-to-node mesh; large deployments can instead use designated BGP route reflectors. Either way, all workload-to-workload traffic ends up delivered by plain IP routing. Calico can sit directly on the data center fabric (whether L2 or L3) without extra NAT or an overlay network.

In addition, Calico uses iptables to provide rich, flexible network policy: per-node ACLs implement multi-tenant isolation, security groups, and other reachability restrictions for workloads.

Calico core components:

  • Felix, the Calico agent, runs on every node that hosts workloads and programs routes and ACLs to keep endpoints reachable;
  • etcd, a distributed key-value store, holds the network metadata and keeps the Calico network state consistent;
  • BGP client (BIRD) distributes the routes that Felix writes into the kernel across the Calico network, keeping workload-to-workload traffic routable;
  • BGP route reflector (BIRD), used in large deployments, replaces the full node-to-node mesh with centralized route distribution through one or more route reflectors;
  • calico/calico-ipam, used as the CNI plugin for Kubernetes;

IP-in-IP
Calico's control-plane design assumes the physical network is an L2 fabric, so that all vRouters are directly reachable and no route needs a physical device as its next hop. To support L3 fabrics, Calico introduced the IP-in-IP option.
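
Whether a pool actually has IP-in-IP turned on can be checked with calicoctl. This is a sketch assuming calicoctl v1.6.x (matching the Calico 2.6 release used below), with ETCD_ENDPOINTS and the TLS variables exported in the environment; the resource name differs in calicoctl v3:

## Show the configured IP pools, including the ipip setting
calicoctl get ipPool -o yaml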

2.2 Installing Calico

For deploying Calico, the official docs recommend the "Standard Hosted Install", that is, having Kubernetes manage all the components as services. The alternative is installing Calico on Kubernetes with the integrations needed for custom configuration management. With the standard hosted install, calico-node, calico-cni, and calico-kube-controller are all deployed and managed by Kubernetes; with the other approach, calico-node is started by systemd through docker, calico-cni is installed from binaries with the network configured by hand, and calico-kube-controller is still deployed through Kubernetes. See the official Calico documentation for the full installation details.

2.2.1 Create the calico-node systemd unit

cat << EOF > /usr/lib/systemd/system/calico-node.service
[Unit]
Description=calico node
After=docker.service
Requires=docker.service

[Service]
User=root
Environment=ETCD_ENDPOINTS=https://172.16.204.131:2379
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run   --net=host --privileged --name=calico-node \\
                                -e ETCD_ENDPOINTS=\${ETCD_ENDPOINTS} \\
                                -e ETCD_CA_CERT_FILE=/etc/etcd/ssl/etcd-root-ca.pem \\
                                -e ETCD_CERT_FILE=/etc/etcd/ssl/etcd.pem \\
                                -e ETCD_KEY_FILE=/etc/etcd/ssl/etcd-key.pem \\
                                -e NODENAME=node01 \\
                                -e IP= \\
                                -e IP6= \\
                                -e NO_DEFAULT_POOLS= \\
                                -e AS= \\
                                -e CALICO_IPV4POOL_CIDR=10.20.0.0/16 \\
                                -e CALICO_IPV4POOL_IPIP=always \\
                                -e CALICO_LIBNETWORK_ENABLED=true \\
                                -e CALICO_NETWORKING_BACKEND=bird \\
                                -e CALICO_DISABLE_FILE_LOGGING=true \\
                                -e FELIX_IPV6SUPPORT=false \\
                                -e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \\
                                -e FELIX_LOGSEVERITYSCREEN=info \\
                                -v /etc/etcd/ssl/etcd-root-ca.pem:/etc/etcd/ssl/etcd-root-ca.pem \\
                                -v /etc/etcd/ssl/etcd.pem:/etc/etcd/ssl/etcd.pem \\
                                -v /etc/etcd/ssl/etcd-key.pem:/etc/etcd/ssl/etcd-key.pem \\
                                -v /var/run/calico:/var/run/calico \\
                                -v /lib/modules:/lib/modules \\
                                -v /run/docker/plugins:/run/docker/plugins \\
                                -v /var/run/docker.sock:/var/run/docker.sock \\
                                -v /var/log/calico:/var/log/calico \\
                                calico/node:v2.6.9
ExecStop=/usr/bin/docker rm -f calico-node
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

Start the calico-node service

systemctl daemon-reload
systemctl start calico-node
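
To have the service come up on boot, and to confirm the container actually started and established its BGP sessions, the following checks help (calicoctl node status is optional and assumes calicoctl is installed on the host):

systemctl enable calico-node
## Confirm the container is running
docker ps --filter name=calico-node
## Optional: check BGP peering from the host
calicoctl node status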

2.2.2 Edit the calico.yaml file

Download the manifests

wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml
wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml

Modify the calico.yaml file

## Point etcd_endpoints at your own etcd cluster
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://172.16.204.131:2379\"@gi' calico.yaml

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`

sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml

## Prevent Kubernetes from starting the calico-node container (it already runs under systemd): comment out the calico-node DaemonSet section
sed -i '106,197s@.*@#&@gi' calico.yaml
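
Before applying the manifest, it is worth sanity-checking the substitutions, for example by decoding one of the embedded secrets back into a certificate (one possible spot check):

## The etcd-ca value should decode back to the CA certificate
grep 'etcd-ca:' calico.yaml | awk '{print $2}' | base64 -d | openssl x509 -noout -subject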

2.2.3 Modify the kubelet configuration

[root@node01 ~]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=172.16.204.132"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.204.132"

# location of the api-server
# KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# Add your own!
# KUBELET_ARGS="--cgroup-driver=systemd"
KUBELET_ARGS="--cgroup-driver=systemd \
              --network-plugin=cni \
              --cni-conf-dir=/etc/cni/net.d \
              --cni-bin-dir=/opt/cni/bin \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

Add the content above, then restart the service

systemctl daemon-reload
systemctl restart kubelet
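
Because the kubelet starts from --experimental-bootstrap-kubeconfig, its first start submits a client CSR that has to be approved before the node registers (unless auto-approval is configured; the CSR name below is illustrative):

kubectl get csr
kubectl certificate approve node-csr-xxxxx
kubectl get nodes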

2.2.4 Start the remaining components

## Create the RBAC roles
kubectl apply -f rbac.yaml
## Start the calico-cni and kube-controllers containers
kubectl create -f calico.yaml

2.2.5 Testing the Calico network

Create a simple demo deployment for testing

cat << EOF > demo.deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: demo-tomcat
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: tomcat:9.0.7
        ports:
        - containerPort: 8080
EOF
kubectl create -f demo.deploy.yml
kubectl get pods -o wide --all-namespaces

Test

[root@master01 calico]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP                NODE
default       demo-tomcat-56697dcc5b-2jv69               1/1       Running   0          34s       10.20.196.136     192.168.133.129
default       demo-tomcat-56697dcc5b-lmc2h               1/1       Running   0          35s       10.20.140.74      192.168.133.130
default       demo-tomcat-56697dcc5b-whbg7               1/1       Running   0          34s       10.20.140.73      192.168.133.130
kube-system   calico-kube-controllers-684fcf8587-66kxn   1/1       Running   0          43m       192.168.133.129   192.168.133.129
kube-system   calico-node-hpr9c                          1/1       Running   0          43m       192.168.133.129   192.168.133.129
kube-system   calico-node-jvpf2                          1/1       Running   0          43m       192.168.133.130   192.168.133.130
[root@master01 calico]# kubectl exec -it demo-tomcat-56697dcc5b-2jv69 bash
root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# pin
pinentry         pinentry-curses  ping             ping6            pinky            
root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# ping 10.20.140.74
PING 10.20.140.74 (10.20.140.74): 56 data bytes
64 bytes from 10.20.140.74: icmp_seq=0 ttl=62 time=0.673 ms
64 bytes from 10.20.140.74: icmp_seq=1 ttl=62 time=0.398 ms
^C--- 10.20.140.74 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.398/0.536/0.673/0.138 ms
root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# ping 10.20.140.73
PING 10.20.140.73 (10.20.140.73): 56 data bytes
64 bytes from 10.20.140.73: icmp_seq=0 ttl=62 time=0.844 ms
64 bytes from 10.20.140.73: icmp_seq=1 ttl=62 time=0.348 ms
^C--- 10.20.140.73 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.348/0.596/0.844/0.248 ms
root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# ping 10.20.196.136
PING 10.20.196.136 (10.20.196.136): 56 data bytes
64 bytes from 10.20.196.136: icmp_seq=0 ttl=64 time=0.120 ms
64 bytes from 10.20.196.136: icmp_seq=1 ttl=64 time=0.068 ms
^C--- 10.20.196.136 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.068/0.094/0.120/0.026 ms

Summary:

There is no universally right choice of Kubernetes network plugin; decide based on your own environment, and note that Calico in particular has quite a few pitfalls. A few practical references:
https://feisky.gitbooks.io/sdn/basic/tcpip.html#tcpip%E7%BD%91%E7%BB%9C%E6%A8%A1%E5%9E%8B
http://www.shushilvshe.com/data/kubernete-calico.html#data/kubernete-calico
http://www.51yimo.com/2017/09/26/calico-install-on-kubernetes/

3. Installing CoreDNS

3.1 CoreDNS overview

Not much to say here: it is simply a replacement for the kube-dns add-on.

3.2 Deployment

First download the deploy.sh and coredns.yaml.sed files, then install directly

./deploy.sh -r 10.254.0.0/16 -i 10.254.0.2 -d cluster.local | kubectl apply -f - 

Tip: the script's contents and parameters may differ depending on which version you use, so skim the script before running it.

[root@master01 coredns]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-684fcf8587-5ndks   1/1       Running   1          11d
kube-system   calico-node-4wskw                          1/1       Running   1          11d
kube-system   calico-node-sbngf                          1/1       Running   1          11d
kube-system   coredns-64b597b598-fmh85                   1/1       Running   0          57s
kube-system   coredns-64b597b598-jf88d                   1/1       Running   0          57s
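
A quick smoke test that the new pods actually answer queries; busybox 1.28 is chosen deliberately because nslookup is broken in newer busybox images:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default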

3.3 Verifying CoreDNS

Deploy a test nginx pod

cat > my-nginx.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

kubectl create -f my-nginx.yaml

Create a service for the my-nginx pods and check its cluster IP

## Create the my-nginx service
kubectl expose deploy my-nginx

## Check the created service
[root@master01 ~]# kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.254.0.1     <none>        443/TCP         12d
default       my-nginx     ClusterIP   10.254.37.75   <none>        80/TCP          13s
kube-system   kube-dns     ClusterIP   10.254.0.2     <none>        53/UDP,53/TCP   4m

Verify that CoreDNS resolves service names

[root@master01 ~]# kubectl exec -it my-nginx-56b48db847-g8fr2 /bin/bash
root@my-nginx-56b48db847-g8fr2:/# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5
root@my-nginx-56b48db847-g8fr2:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.37.75): 48 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-56b48db847-g8fr2:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 48 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-56b48db847-g8fr2:/# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 48 data bytes
^C--- kube-dns.kube-system.svc.cluster.local ping statistics ---
6 packets transmitted, 0 packets received, 100% packet loss
root@my-nginx-56b48db847-g8fr2:/# curl -I my-nginx
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Tue, 08 May 2018 07:27:13 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
Connection: keep-alive
ETag: "54999765-264"
Accept-Ranges: bytes

root@my-nginx-56b48db847-g8fr2:/# curl my-nginx.default.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}

...rest of output omitted...

As the output above shows, service names resolve to the correct cluster IPs. The pings report 100% packet loss only because service VIPs do not answer ICMP; resolution itself works, and curl against the service name returns HTTP 200.
