Description of each cluster component:
Master node:
The master consists of four main components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
APIServer: kube-apiserver exposes the RESTful Kubernetes API and is the single entry point for management commands. Every create, delete, update, or get of a resource goes through the API server, which then persists the result in etcd. kubectl (the client tool shipped with Kubernetes) is simply a wrapper around this API and talks directly to the API server.
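As a quick illustration (once kubectl is usable, e.g. on the master later in this guide), you can watch kubectl translate its subcommands into REST calls against the API server; these are standard API paths, not part of the deployment steps themselves.

kubectl get nodes --v=8                     # -v=8 prints the underlying HTTP requests and responses
kubectl get --raw /api/v1/nodes | head -c 300   # query the raw REST API through kubectl's credentials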
scheduler: kube-scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus the list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with a default scheduling algorithm but keeps the interface open, so you can define your own scheduler to match your own requirements.
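A minimal sketch of the "Pod-to-Node binding" idea: the scheduler's decision ends up in spec.nodeName, and a Pod can opt into a custom scheduler via spec.schedulerName. The scheduler name "my-scheduler" below is hypothetical and only for illustration.

cat << EOF | tee scheduler-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduler-demo
spec:
  schedulerName: my-scheduler     # hypothetical custom scheduler; the default is "default-scheduler"
  containers:
  - name: app
    image: nginx
EOF
# after scheduling, the chosen node appears in the binding:
# kubectl get pod scheduler-demo -o jsonpath='{.spec.nodeName}'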
controller manager: if the API server does the front-office work, kube-controller-manager takes care of the back office. Every resource type has a controller, and the controller manager runs and manages these controllers. For example, when we create a Pod through the API server, the API server's job is done once the object has been created; it is the corresponding controller that then keeps the Pod's actual state in line with the desired state.
etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what makes the RESTful API possible.
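For example, once the API server is running you can peek at the state it persists. By default kube-apiserver stores objects under the /registry prefix using the etcd v3 API; the certificate paths below assume the layout created later in this guide.

ETCDCTL_API=3 /k8s/etcd/bin/etcdctl \
  --cacert=/k8s/etcd/ssl/ca.pem \
  --cert=/k8s/etcd/ssl/server.pem \
  --key=/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.4.71:2379" \
  get /registry --prefix --keys-only | head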
Node (worker) nodes:
Each worker node mainly runs the kubelet and kube-proxy components (together with Docker as the container runtime and flanneld for the Pod network, as listed in the environment table below).
kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. It forwards TCP and UDP connections and, by default, distributes client traffic across the backend Pods of a Service using round robin. For service discovery, kube-proxy watches Service and Endpoint objects (served by the API server and backed by etcd) and maintains the Service-to-Endpoint mapping, so changes in backend Pod IPs are invisible to callers. kube-proxy also supports session affinity.
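As a small sketch of the session affinity feature mentioned above, a Service can pin each client to a single backend Pod by setting sessionAffinity; the Service name and selector here are illustrative examples, not part of this deployment.

cat << EOF | tee affinity-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo                   # example selector; match it to your Pods' labels
  sessionAffinity: ClientIP     # default is "None" (round robin across endpoints)
  ports:
  - port: 80
    targetPort: 8080
EOF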
kubelet: kubelet is the master's agent on each worker node and the most important component there. It maintains and manages all containers on that node, but leaves containers not created through Kubernetes alone. In essence, it keeps each Pod's actual running state in line with its desired state.
OS: CentOS 7.5.1804
IP | hostname | services |
---|---|---|
192.168.4.71 | master | kube-apiserver, kube-scheduler, kube-controller-manager, etcd |
192.168.4.72 | node1 | kubelet, kube-proxy, flanneld, docker |
192.168.4.76 | node2 | kubelet, kube-proxy, flanneld, docker |
systemctl stop firewalld
setenforce 0                  # disable SELinux temporarily
vi /etc/selinux/config
SELINUX=disabled              # disable SELinux permanently (takes effect after reboot)
Client Binaries: https://dl.k8s.io/v1.14.0/kubernetes-client-linux-amd64.tar.gz
Server Binaries: https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz
Node Binaries: https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz
etcd: https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
flannel: https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.0/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
cd /k8s/etcd/ssl/
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.4.71",
    "192.168.4.72",
    "192.168.4.76"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
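Optionally, cfssl-certinfo (installed above) can be used as a sanity check that the issued certificate really contains the expected member IPs before it is distributed; this is not a required step.

cfssl-certinfo -cert server.pem
# or with openssl, confirm the SANs cover all etcd member addresses:
openssl x509 -noout -text -in server.pem | grep -A1 "Subject Alternative Name"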
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
[root@t71 cfg]# vim /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.71:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.71:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.71:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
mkdir /data1/etcd
[root@t71 cfg]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
[root@t71 bin]# etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.71:2379" cluster-health
member ac829673d2b22824 is healthy: got healthy result from https://192.168.4.71:2379
cluster is healthy
[root@t71 bin]#
cd /k8s/kubernetes/ssl
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.254.0.1",
    "127.0.0.1",
    "192.168.4.71",
    "192.168.4.72",
    "192.168.4.76",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
[root@t71 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
663eb46fb81c4cf2a9bedb84bea03582
vim /k8s/kubernetes/cfg/token.csv
663eb46fb81c4cf2a9bedb84bea03582,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@t71 cfg]# vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.4.71:2379 \
--bind-address=192.168.4.71 \
--secure-port=6443 \
--advertise-address=192.168.4.71 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
[root@t71 cfg]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
[root@t71 cfg]# vim /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080"
[root@t71 cfg]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
[root@t71 cfg]# vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
[root@t71 cfg]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
vim /etc/profile
export PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile
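With the three control-plane services running and kubectl on the PATH, a quick health check of the master components looks roughly like this (sample output, shown here only for orientation):

kubectl get cs
# NAME                 STATUS    MESSAGE             ERROR
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0               Healthy   {"health":"true"}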
yum install -y yum-utils     # provides yum-config-manager
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers the node with kube-apiserver, and its built-in cAdvisor collects and reports the node's resource usage. For security, only the secure HTTPS port should be exposed, with requests authenticated and authorized so that unauthorized access (e.g. from apiserver or heapster without credentials) is rejected.
wget https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
# on the master, copy the generated certificates to the node
scp *.pem 192.168.4.72:$PWD
[root@t72 cfg]# vim environment.sh
#!/bin/bash
# create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=663eb46fb81c4cf2a9bedb84bea03582
KUBE_APISERVER="https://192.168.4.71:6443"

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
sh environment.sh
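If you want to confirm that the two kubeconfig files were generated correctly before wiring them into kubelet and kube-proxy, something like the following should show the embedded cluster, user, and context entries:

ls bootstrap.kubeconfig kube-proxy.kubeconfig
kubectl config view --kubeconfig=bootstrap.kubeconfig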
[root@t72 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.72
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true    # defaults to false as of 1.10
  webhook:
    enabled: false   # defaults to true as of 1.10
authorization:
  mode: AlwaysAllow
[root@t72 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.72 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@t72 cfg]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
# run on the master: allow the kubelet-bootstrap user to submit CSRs
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   102s   kubelet-bootstrap   Pending
Approve the node's CSR:
# kubectl certificate approve node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc
certificatesigningrequest.certificates.k8s.io/node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc approved
Check the CSR again:
kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   5m13s   kubelet-bootstrap   Approved,Issued
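Once the CSR is approved, the kubelet receives its client certificate and registers itself. On the master the node should appear shortly afterwards (sample output; the node may stay NotReady until the flannel network below is configured):

kubectl get nodes
# NAME           STATUS   ROLES    AGE   VERSION
# 192.168.4.72   Ready    <none>   1m    v1.14.0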
[root@t72 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.72 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
[root@t72 cfg]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
Note: if kubelet or kube-proxy is misconfigured during this process (for example a wrong listen IP or hostname-override causing a "node not found" error), delete the generated kubelet-client certificates, restart the kubelet service, and approve the new CSR again.
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.71:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
Note: the Pod network segment written here (${CLUSTER_CIDR}) must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager.
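You can read the value back to make sure flanneld will find it under the -etcd-prefix used below:

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem \
  --cert-file=/k8s/etcd/ssl/server.pem \
  --key-file=/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.4.71:2379" \
  get /k8s/network/config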
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
[root@t72 cfg]# vim flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.4.71:2379 --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem --etcd-prefix=/k8s/network"
[root@t72 cfg]# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
#ExecStart=/k8s/kubernetes/bin/flanneld $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
Notes: the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env (the -d target above); when docker starts later it uses the environment variables in that file to configure the docker0 bridge. flanneld communicates with other nodes over the interface of the system default route; on hosts with multiple network interfaces (e.g. an internal and a public one) the -iface option selects the interface to use. flanneld must run as root.
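After flanneld starts, /run/flannel/subnet.env should contain the docker options derived from the subnet assigned to this node; dockerd then picks up $DOCKER_NETWORK_OPTIONS from it via the unit file below. The values shown here are only illustrative, not actual output from this cluster.

cat /run/flannel/subnet.env
# DOCKER_NETWORK_OPTIONS=" --bip=10.254.43.1/24 --ip-masq=false --mtu=1450"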
[root@t72 cfg]# cat /usr/lib/systemd/system/docker.service | grep -v "#"
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl restart kubelet
systemctl restart kube-proxy
ip addr        # docker0 and flannel.1 should now sit in the same flannel subnet
kubectl get nodes -o wide      # on the master; the node should now be Ready
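As an optional final smoke test, a throwaway deployment verifies that scheduling, the pause image, and the flannel network work end to end; the name and image below are arbitrary examples.

kubectl run nginx-test --image=nginx --port=80
kubectl get pods -o wide                 # the pod should get an IP from the flannel subnet assigned to the node
kubectl delete deployment nginx-test     # clean up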