Kubernetes Container Cluster Management
Kubernetes is a container cluster management system open-sourced by Google in June 2014 and written in Go; it is also known as K8S.
K8S is derived from Borg, an internal Google container cluster management system that had already been running at scale in Google production for a decade.
K8S is mainly used to automate the deployment, scaling, and management of containerized applications, providing a complete feature set covering resource scheduling, deployment management, service discovery, scaling, and monitoring.
Kubernetes v1.0 was officially released in July 2015.
The goal of Kubernetes is to make deploying containerized applications simple and efficient.
Official website: www.kubernetes.io
Containers within a Pod can share data through volumes.
A service inside a container may block and stop handling requests; health check policies can be configured to keep applications robust.
Controllers maintain the Pod replica count, ensuring that a Pod, or a group of Pods of the same kind, is always available.
Pod replicas are scaled automatically based on configured metrics such as CPU utilization (see the autoscaling sketch after this list).
Environment variables or the DNS add-on let programs in containers discover a Pod's access address.
A group of Pod replicas is assigned a private cluster IP address, and requests are load-balanced to the backend containers; other Pods inside the cluster can reach the application through this ClusterIP.
Services are updated without downtime: Pods are replaced one at a time rather than deleting the whole service at once.
Services are described and deployed from files, which makes application deployment more efficient.
The Node components integrate the cAdvisor resource collector; Heapster can aggregate resource data from all cluster nodes, store it in the InfluxDB time-series database, and present it with Grafana.
Role-Based Access Control (RBAC), authentication, authorization, and related policies are supported.
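A sketch of the autoscaling feature referenced in the list above, assuming a Deployment named nginx already exists (one is created in the test example later) and that CPU metrics are available through Heapster:

kubectl autoscale deployment nginx --cpu-percent=80 --min=2 --max=10   # keep 2-10 replicas around 80% CPU
kubectl get hpa                                                        # inspect the resulting HorizontalPodAutoscaler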
Basic objects:
Pod is the smallest deployable unit. A Pod consists of one or more containers; the containers in a Pod share storage and network and run on the same Docker host.
Service is an abstraction of an application service; it defines a logical set of Pods and a policy for accessing them.
A Service fronts a set of Pods as a single access point: it is assigned a cluster IP address, and requests to that IP are load-balanced to the containers in the backend Pods.
A Service selects the group of Pods it serves through a Label Selector.
Volumes share data among the containers in a Pod.
Namespaces logically assign objects to different namespaces, so that different projects, users, and so on can be managed separately, each with its own control policies, enabling multi-tenancy.
Namespaces are also called virtual clusters.
Labels distinguish objects (such as Pods and Services) and exist as key/value pairs; each object can carry multiple labels, and objects are associated through them (see the selector examples below).
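A small illustration of labels and selectors (the Pod name "mypod" is a placeholder):

kubectl label pod mypod app=web tier=frontend        # attach two labels to a Pod
kubectl get pods -l app=web                          # equality-based selector (the only kind RC supports)
kubectl get pods -l 'tier in (frontend,backend)'     # set-based selector (supported by RS)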
Higher-level abstractions built on the basic objects:
ReplicaSet is the next-generation Replication Controller. It ensures a specified number of Pod replicas at any given time and provides declarative updates, among other features.
The only difference between RC and RS is label selector support: RS supports the newer set-based selectors, while RC supports only equality-based selectors.
Deployment is a higher-level API object that manages ReplicaSets and Pods and provides declarative updates, among other features.
The official recommendation is to manage ReplicaSets through Deployments rather than using them directly, which means you may never need to manipulate ReplicaSet objects yourself.
StatefulSet suits persistent applications: unique network identifiers (IP), persistent storage, and ordered deployment, scaling, deletion, and rolling updates.
DaemonSet ensures that all (or some) nodes run a copy of a Pod. When a node joins the Kubernetes cluster, the Pod is scheduled onto it; when a node is removed from the cluster, that Pod is deleted. Deleting a DaemonSet cleans up all Pods it created.
A Job is a one-off task: once it completes, its Pod is destroyed and no new container is started. Tasks can also be run on a schedule (CronJob); a minimal sketch follows.
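A minimal sketch of both task types using kubectl run (image and command are illustrative; in v1.11, --restart=OnFailure creates a Job, and adding --schedule creates a CronJob):

kubectl run pi --image=perl --restart=OnFailure -- perl -Mbignum=bpi -wle 'print bpi(100)'
kubectl get jobs
kubectl run hello --image=busybox --restart=OnFailure --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo hello'
kubectl get cronjobs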
System architecture and component functions
Master components:
kube-apiserver: the Kubernetes API is the unified entry point to the cluster and the coordinator among its components, exposing an HTTP API. All create/update/delete and watch operations on object resources go through the APIServer, which then persists them to Etcd.
kube-controller-manager: handles the routine background tasks in the cluster. Each resource has a corresponding controller, and the ControllerManager is responsible for managing these controllers.
kube-scheduler: selects a Node for newly created Pods according to the scheduling algorithm.
Node components:
kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on the host: creating containers, mounting volumes for Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
kube-proxy implements the Pod network proxy on each Node, maintaining network rules and performing layer-4 load balancing.
Docker: runs the containers.
Third-party services:
Etcd: a distributed key-value store used to persist cluster state, such as Pod and Service object data.
The following diagram shows the Kubernetes architecture and the communication protocols between components.
Enough talk; on to the deployment.
Cluster Deployment
1. Environment planning
2. Install Docker
3. Self-sign TLS certificates
4. Deploy the Etcd cluster
5. Deploy the Flannel network
6. Create the Node kubeconfig files
7. Obtain the K8S binary package
8. Run the Master components
9. Run the Node components
10. Check cluster status
11. Launch a test example
12. Deploy the Web UI (Dashboard)
Role   | IP              | Components                                                 | Recommended spec
master | 192.168.247.211 | kube-apiserver kube-controller-manager kube-scheduler etcd | 2+ CPU cores, 2GB+ RAM
node01 | 192.168.247.212 | kubelet kube-proxy docker flannel etcd                     | 2+ CPU cores, 2GB+ RAM
node02 | 192.168.247.213 | kubelet kube-proxy docker flannel etcd                     | 2+ CPU cores, 2GB+ RAM
Software   | Version
Linux OS   | CentOS7.4_x64
Kubernetes | 1.11.7
Docker     | 17.03.2-ce
Etcd       | 3.2.12
Kubernetes releases: https://github.com/kubernetes/kubernetes/releases
cat <<EOF >>/etc/hosts
192.168.247.211 master
192.168.247.212 node01
192.168.247.213 node02
EOF
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
swapoff -a
sed -i 's/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g' /etc/fstab
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpdate -u cn.pool.ntp.org
hwclock --systohc
timedatectl set-timezone Asia/Shanghai
yum install wget vim lsof net-tools lrzsz -y
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
# Set resource limits and kernel parameters
echo "* soft nofile 190000" >> /etc/security/limits.conf
echo "* hard nofile 200000" >> /etc/security/limits.conf
echo "* soft nproc 252144" >> /etc/security/limits.conf
echo "* hard nproc 262144" >> /etc/security/limits.conf
tee /etc/sysctl.conf <<-'EOF'
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.tcp_tw_recycle = 0
net.ipv4.ip_local_port_range = 10000 61000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_forward = 1
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_mem = 131072 262144 524288
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_low_latency = 0
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_rmem = 8760 256960 4088000
net.ipv4.tcp_wmem = 8760 256960 4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_syn_retries = 1
EOF
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
sysctl -p
reboot
# Step 1: install required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2 unzip
# Step 2: add the repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: update the cache and install Docker-CE
yum makecache fast
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install docker-ce-17.03.2.ce-1.el7.centos -y
# Step 4: start the Docker service
service docker start
systemctl enable docker
# Note:
# The official repo enables only the latest release by default; other versions can be
# enabled by editing the repo file. For example, the test channel is disabled by default
# and can be enabled as follows (the same applies to other channels):
# vim /etc/yum.repos.d/docker-ce.repo
#   under [docker-ce-test], change enabled=0 to enabled=1
#
# Installing a specific Docker-CE version:
# Step 1: list the available Docker-CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64  17.03.1.ce-1.el7.centos  docker-ce-stable
#   docker-ce.x86_64  17.03.1.ce-1.el7.centos  @docker-ce-stable
#   docker-ce.x86_64  17.03.0.ce-1.el7.centos  docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION as listed above, e.g. 17.03.0.ce.1-1.el7.centos):
# sudo yum -y install docker-ce-[VERSION]
# When installing from Aliyun classic or VPC internal networks, replace the repo URL in Step 2:
# Classic network:
# sudo yum-config-manager --add-repo http://mirrors.aliyuncs.com/docker-ce/linux/centos/docker-ce.repo
# VPC network:
# sudo yum-config-manager --add-repo http://mirrors.cloud.aliyuncs.com/docker-ce/linux/centos/docker-ce.repo

# Configure a registry mirror (accelerator)
cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [ "https://registry.docker-cn.com"],
  "insecure-registries":["192.168.247.210:5000"]
}
EOF
Component      | Certificates used
etcd           | ca.pem, server.pem, server-key.pem
flannel        | ca.pem, server.pem, server-key.pem
kube-apiserver | ca.pem, server.pem, server-key.pem
kubelet        | ca.pem, ca-key.pem
kube-proxy     | ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl        | ca.pem, admin.pem, admin-key.pem
Install the certificate generation tool cfssl on the master:
mkdir ssl; cd ssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 --no-check-certificate
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
Run certificate.sh to generate the certificates:
[root@master ssl]# cat certificate.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.247.211",
    "192.168.247.212",
    "192.168.247.213",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Note: make a copy of the ssl directory at this point, because these generated certificates will be needed again later for RBAC authorization!
Then run the following command to keep only the .pem certificates:
ls |grep -v "pem"|xargs rm -fr
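An optional sanity check (a sketch, assuming cfssl-certinfo was installed as above): inspect a generated certificate and confirm the CN, the SAN host list, and the 87600h expiry configured in ca-config.json.

cfssl-certinfo -cert server.pem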
etcd is a highly available key-value store, mainly used for shared configuration and service discovery. Developed and maintained by CoreOS and inspired by ZooKeeper and Doozer, it is written in Go and uses the Raft consensus algorithm for log replication to guarantee strong consistency. Raft is a consensus algorithm for log replication in distributed systems that reaches agreement through leader election. Google's container cluster manager Kubernetes, the open-source PaaS platform Cloud Foundry, and CoreOS's Fleet all make extensive use of etcd. Managing state across nodes has always been hard in distributed systems; etcd is designed for service discovery and registration in cluster environments, offering TTL-based key expiry, change watches, multiple values, directory listening, and atomic distributed-lock operations, which makes it easy to track and manage the state of cluster nodes.
etcd's main characteristics: a simple HTTP/JSON API, security through optional TLS client-certificate authentication, and reliability based on the Raft consensus algorithm.
Binary download: https://github.com/coreos/etcd/releases/tag/v3.2.12
Deployment (master, node01, node02):
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
[root@master ~]# tar -xf etcd-v3.2.12-linux-amd64.tar.gz
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcd /opt/kubernetes/bin/
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcdctl /opt/kubernetes/bin/
[root@master ~]# cat /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.247.211:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.211:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.211:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.211:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.247.211:2380,etcd02=https://192.168.247.212:2380,etcd03=https://192.168.247.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@master ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@master ~]# cp ssl/server*pem ssl/ca*.pem /opt/kubernetes/ssl/
# Set up passwordless SSH
ssh-keygen
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.247.212
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.247.213
[root@master ~]# scp -r /opt/kubernetes/ 192.168.247.212:/opt/
[root@master ~]# scp -r /opt/kubernetes/ 192.168.247.213:/opt/
[root@master ~]# scp -r /usr/lib/systemd/system/etcd.service 192.168.247.212:/usr/lib/systemd/system/
[root@master ~]# scp -r /usr/lib/systemd/system/etcd.service 192.168.247.213:/usr/lib/systemd/system/
[root@master ~]# systemctl start etcd && systemctl enable etcd
On node01 and node02, edit /opt/kubernetes/cfg/etcd and change ETCD_NAME (etcd02/etcd03) and the IP addresses in the *_URLS parameters to match the local host, then start etcd.
etcd configuration parameters:
ETCD_NAME - node name
ETCD_DATA_DIR - data directory
ETCD_LISTEN_PEER_URLS - peer (cluster) communication listen address
ETCD_LISTEN_CLIENT_URLS - client access listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS - advertised peer address
ETCD_ADVERTISE_CLIENT_URLS - advertised client address
ETCD_INITIAL_CLUSTER - cluster node addresses
ETCD_INITIAL_CLUSTER_TOKEN - cluster token
ETCD_INITIAL_CLUSTER_STATE - state when joining the cluster: "new" for a new cluster, "existing" to join an existing one
Check cluster health:
[root@master ssl]# /opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
cluster-health
member a6c341768b1e58b is healthy: got healthy result from https://192.168.247.211:2379
member 62b5a3c1db53387a is healthy: got healthy result from https://192.168.247.212:2379
member d0f8841f2d3e2788 is healthy: got healthy result from https://192.168.247.213:2379
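Beyond cluster-health, a quick write/read/delete smoke test (a sketch using the etcdctl v2 API; /test is a throwaway key) confirms the cluster accepts writes:

/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.211:2379" \
set /test ok
/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.211:2379" \
get /test
/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.211:2379" \
rm /test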
Overlay Network: a virtual network technology layered on top of the underlying network, in which hosts are connected by virtual links.
VXLAN: encapsulates the source packet in UDP, wraps it with the underlay network's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.
Flannel: one kind of overlay network; it likewise encapsulates source packets inside another network packet for routing, forwarding, and communication, and currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends.
Other mainstream solutions for multi-host container networking: tunnel-based (Weave, Open vSwitch) and route-based (Calico).
Cluster deployment - deploy the Flannel network (node01, node02)
1) Write the allocated subnet range into etcd for flanneld to use:
[root@master ssl]# /opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
2) Download the binary package
# wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar -xf flannel-v0.9.1-linux-amd64.tar.gz
scp flanneld mk-docker-opts.sh 192.168.247.212:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh 192.168.247.213:/opt/kubernetes/bin/
3) Configure Flannel
[root@node01 cfg]# pwd
/opt/kubernetes/cfg
[root@node01 cfg]# cat flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
4) Manage Flannel with systemd
[root@node01 cfg]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

5) Configure Docker to start with the allocated subnet

[root@node01 cfg]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
6) Start the services (strictly in this order)
[root@node01 cfg]# systemctl daemon-reload
[root@node01 cfg]# systemctl restart flanneld && systemctl enable flanneld
[root@node01 cfg]# systemctl restart docker
Sync the files to the other node, then start the services there:
cd /opt/kubernetes/cfg/
scp flanneld 192.168.247.212:/opt/kubernetes/cfg/
scp flanneld 192.168.247.213:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/flanneld.service 192.168.247.212:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/flanneld.service 192.168.247.213:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service 192.168.247.213:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service 192.168.247.212:/usr/lib/systemd/system/
7) Test
# List all subnets in the cluster
[root@master ssl]# /opt/kubernetes/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
> ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.100.0-24
/coreos.com/network/subnets/172.17.57.0-24
/coreos.com/network/subnets/172.17.88.0-24
# Look up the physical host behind a subnet
[root@master ssl]# /opt/kubernetes/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
> get /coreos.com/network/subnets/172.17.57.0-24
{"PublicIP":"192.168.247.212","BackendType":"vxlan","BackendData":{"VtepMAC":"a6:e3:be:9b:f6:b9"}}
Note that flannel.1 and docker0 are on the same subnet.
# Ping a container on the 172.17.88.0/24 subnet
[root@node01 cfg]# ping 172.17.88.1
PING 172.17.88.1 (172.17.88.1) 56(84) bytes of data.
64 bytes from 172.17.88.1: icmp_seq=1 ttl=64 time=0.581 ms
64 bytes from 172.17.88.1: icmp_seq=2 ttl=64 time=0.871 ms
64 bytes from 172.17.88.1: icmp_seq=3 ttl=64 time=6.78 ms
64 bytes from 172.17.88.1: icmp_seq=4 ttl=64 time=0.874 ms
^C
--- 172.17.88.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3011ms
rtt min/avg/max/mdev = 0.581/2.277/6.783/2.604 ms
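To confirm container-to-container traffic across hosts (a sketch; the container name "web" is illustrative), start a container on node01, read the address Docker assigned to it from this node's Flannel subnet, and ping that address from node02:

# on node01
docker run -d --name web nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web   # prints an address in this node's subnet, e.g. 172.17.57.x
# on node02: ping the address printed above
ping 172.17.57.2                                         # substitute the actual address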
1. Create the TLS Bootstrapping Token
2. Create the kubelet kubeconfig
3. Create the kube-proxy kubeconfig
Download the package: https://dl.k8s.io/v1.11.7/kubernetes-server-linux-amd64.tar.gz
[root@master master_pkg]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@master master_pkg]# mv kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin
[root@master bin]# pwd
/opt/kubernetes/bin
[root@master bin]# chmod +x kubectl
[root@master bin]# echo "PATH=$PATH:/opt/kubernetes/bin" >>/etc/profile
[root@master bin]# source /etc/profile
[root@master ssl]# pwd
/root/ssl
[root@master ssl]# cat kubeconfig.sh
# Create the TLS Bootstrapping Token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://192.168.247.211:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

[root@master ssl]# sh kubeconfig.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master ssl]# cat token.csv
dc434e4db0f27ac84703bacbb8157540,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master ssl]# cp token.csv /opt/kubernetes/cfg/
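As a quick check (a sketch), kubectl config view can confirm that both generated kubeconfig files point at the right API server and carry embedded CA data:

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig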
Installation scripts for the three master components:
[root@master master_pkg]# cat apiserver.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.1.195"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

[root@master master_pkg]# cat controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

[root@master master_pkg]# cat scheduler.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
apiserver configuration file
Parameter descriptions:
--logtostderr - log to stderr
--v - log level
--etcd-servers - etcd cluster addresses
--bind-address - listen address
--secure-port - https secure port
--advertise-address - advertised cluster address
--allow-privileged - allow privileged containers
--service-cluster-ip-range - Service virtual IP address range
--enable-admission-plugins - admission control plugins
--authorization-mode - authorization mode; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth - enable the TLS bootstrap feature (covered below)
--token-auth-file - token file
--service-node-port-range - default port range allocated to NodePort Services
Deploy the master
[root@master ~]# cp ssl/ca*pem ssl/server*pem /opt/kubernetes/ssl/
[root@master master_pkg]# chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
[root@master master_pkg]# ./apiserver.sh 192.168.247.211 https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master master_pkg]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master master_pkg]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master master_pkg]# echo "export PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
[root@master master_pkg]# source /etc/profile
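Before moving on, a quick sanity check (a sketch): all three components should be running, and the apiserver should answer on the local insecure port 8080 configured in apiserver.sh:

ps -ef | grep -E 'kube-(apiserver|controller-manager|scheduler)' | grep -v grep
curl http://127.0.0.1:8080/version    # should return the v1.11.7 version JSON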
1. Copy the node kubeconfig files from the master to /opt/kubernetes/cfg/ on each node
[root@master ssl]# scp *kubeconfig 192.168.247.212:/opt/kubernetes/cfg/
[root@node01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@node01 ~]# mv kubelet kube-proxy /opt/kubernetes/bin
2. Installation scripts for the two node components
[root@node01 ~]# cat kubelet.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

[root@node01 ~]# cat proxy.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.200"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
kubelet configuration file
Parameter descriptions:
--hostname-override - hostname shown in the cluster
--kubeconfig - kubeconfig file location (generated automatically)
--experimental-bootstrap-kubeconfig - the bootstrap.kubeconfig file generated earlier
--cert-dir - where issued certificates are stored
--pod-infra-container-image - image used for the Pod infrastructure (pause) container
3. Deploy the node
[root@node01 ~]# chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
[root@node01 ~]# ./kubelet.sh 192.168.247.212 10.10.10.2
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# ./proxy.sh 192.168.247.212
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
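A status check at this point (a sketch): both services should be active, but the kubelet cannot register as a Node until its bootstrap CSR is approved in the next step:

systemctl status kubelet kube-proxy
journalctl -u kubelet --no-pager | tail   # the log should show the certificate signing request being submitted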
4. Bind the kubelet-bootstrap user on the master
[root@master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
[root@node01 cfg]# systemctl start kubelet && systemctl enable kubelet
[root@node01 cfg]# systemctl start kube-proxy && systemctl enable kube-proxy
[root@master ssl]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54   26s   kubelet-bootstrap   Pending
[root@master ssl]# kubectl certificate approve node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54
certificatesigningrequest "node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54" approved
[root@master ssl]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54   1m    kubelet-bootstrap   Approved,Issued
# kubectl get node
# kubectl get componentstatus
Cluster deployment - launch a test example
# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx
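To verify the Service from outside the cluster (a sketch): look up the NodePort allocated from the 30000-50000 range configured earlier, then curl any node's IP on that port:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.247.212:${NODE_PORT}   # should return the nginx welcome page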
Dashboard manifests:
[root@master k8s_yaml]# cat kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

[root@master k8s_yaml]# cat dashboard-admin.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
Install the dashboard, then open https://192.168.247.212:30001/#! in a browser and skip authentication.
[root@master k8s_yaml]# kubectl apply -f kubernetes-dashboard.yaml
[root@master k8s_yaml]# kubectl apply -f dashboard-admin.yaml
Or access it with a token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token
Note a gotcha here: the token copied from the terminal may contain line breaks; paste it into a text editor and remove the wrapping first! A one-liner that avoids this is sketched below.
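A convenience sketch (assuming the secret name matches the grep pattern used above) that prints only the token value and sidesteps the wrapping problem:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep '^token:' | awk '{print $2}'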
The deployment scripts used here can be downloaded from: https://github.com/hejianlai/Docker-Kubernetes/tree/master/Kubernetes/install