kubernetes 1.7.0 + flannel binary deployment

Deploy kubernetes 1.7.0 + flannel from binary files, running kube-apiserver, kube-controller-manager and kube-scheduler locally.

(1). Environment

k8s-master-1: 192.168.54.12
k8s-node1:    192.168.54.13
k8s-node2:    192.168.54.14

(2). Initialize the environment

# Set the hostname on each host
hostnamectl --static set-hostname <hostname>

192.168.54.12 - k8s-master-1
192.168.54.13 - k8s-node1
192.168.54.14 - k8s-node2

# Edit /etc/hosts so the hosts can reach each other by hostname
vi /etc/hosts

192.168.54.12 k8s-master-1
192.168.54.13 k8s-node1
192.168.54.14 k8s-node2

Create the CA certificates

CloudFlare's PKI toolkit cfssl is used to generate the Certificate Authority (CA) certificate and key files.

(1). Install cfssl

mkdir -p /opt/local/cfssl
cd /opt/local/cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo

chmod +x *

(2). Create the CA certificate configuration

mkdir -p /opt/ssl
cd /opt/ssl

/opt/local/cfssl/cfssl print-defaults config > config.json
/opt/local/cfssl/cfssl print-defaults csr > csr.json

# config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

# csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

(3). Generate the CA certificate and private key

cd /opt/ssl/

/opt/local/cfssl/cfssl gencert -initca csr.json | /opt/local/cfssl/cfssljson -bare ca

[root@k8s-master-1 ssl]# ls -lt
total 20
-rw-r--r-- 1 root root 1005 Jul 3 17:26 ca.csr
-rw------- 1 root root 1675 Jul 3 17:26 ca-key.pem
-rw-r--r-- 1 root root 1363 Jul 3 17:26 ca.pem
-rw-r--r-- 1 root root  210 Jul 3 17:24 csr.json
-rw-r--r-- 1 root root  292 Jul 3 17:23 config.json

(4). Distribute the certificates

# Create the certificate directory
mkdir -p /etc/kubernetes/ssl

# Copy all files into the directory
cp * /etc/kubernetes/ssl

# The files need to be copied to every k8s machine
scp * 192.168.54.13:/etc/kubernetes/ssl/
scp * 192.168.54.14:/etc/kubernetes/ssl/

etcd cluster

etcd is the foundational component of the k8s cluster; mutual TLS authentication does not seem necessary here.

(1). Install etcd

yum -y install etcd3

(2). Modify the etcd configuration

# etcd-1
# Edit the configuration file /etc/etcd/etcd.conf; the following parameters need to be set:

mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf-bak
vi /etc/etcd/etcd.conf

ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.54.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.54.12:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.54.12:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.54.12:2379"

# etcd-2
# Edit the configuration file /etc/etcd/etcd.conf; the following parameters need to be set:

mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf-bak
vi /etc/etcd/etcd.conf

ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.54.13:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.54.13:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.54.13:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.54.13:2379"

# etcd-3
# Edit the configuration file /etc/etcd/etcd.conf; the following parameters need to be set:

mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf-bak
vi /etc/etcd/etcd.conf

ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/etcd3.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.54.14:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.54.14:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.54.14:2379"
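The three etcd.conf files above differ only in ETCD_NAME, the data directory and the local IP. As an optional convenience (not part of the original steps), the same file can be rendered from two variables; a minimal sketch, to be run on each host with NODE_NAME and NODE_IP adjusted for that host:

# Set per host: etcd1/192.168.54.12, etcd2/192.168.54.13, etcd3/192.168.54.14
NODE_NAME=etcd1
NODE_IP=192.168.54.12

cat > /etc/etcd/etcd.conf << EOF
ETCD_NAME=${NODE_NAME}
ETCD_DATA_DIR="/var/lib/etcd/${NODE_NAME}.etcd"
ETCD_LISTEN_PEER_URLS="http://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${NODE_IP}:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${NODE_IP}:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://${NODE_IP}:2379"
EOF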
ETCD_LISTEN_CLIENT_URLS="http://192.168.54.14:2379,http://127.0.0.1:2379" ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.54.14:2380" ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380" ETCD_INITIAL_CLUSTER_STATE="new" ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster" ETCD_ADVERTISE_CLIENT_URLS="http://192.168.54.14:2379" 修改 etcd 啓動文件 /usr/lib/systemd/system/etcd.service sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service (3).啓動 etcd 分別啓動 全部節點的 etcd 服務 systemctl enable etcd systemctl start etcd systemctl status etcd (4).驗證 etcd 集羣狀態 查看 etcd 集羣狀態: etcdctl cluster-health # 出現 cluster is healthy 表示成功 查看 etcd 集羣成員: etcdctl member list member 4b622f1d4543c5f7 is healthy: got healthy result from http://192.168.54.13:2379 member 647542be2d7fdef3 is healthy: got healthy result from http://192.168.54.12:2379 member 83464a62a714c625 is healthy: got healthy result from http://192.168.54.14:2379 Flannel 網絡 (1).安裝 flannel 這邊其實因爲內網,就沒有使用SSL認證,直接使用了 yum -y install flannel 清除網絡中遺留的docker 網絡 (docker0, flannel0 等) ifconfig 若是存在 請刪除之,以避免發生沒必要要的未知錯誤 ip link delete docker0 .... (2).配置 flannel 設置 flannel 所用到的IP段 etcdctl --endpoint http://192.168.54.12:2379 set /flannel/network/config '{"Network":"10.233.0.0/16","SubnetLen":25,"Backend":{"Type":"vxlan","VNI":1}}' 接下來修改 flannel 配置文件 vim /etc/sysconfig/flanneld # 舊版本: FLANNEL_ETCD="http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379" # 修改成 集羣地址 FLANNEL_ETCD_KEY="/flannel/network/config" # 修改成 上面導入配置中的 /flannel/network FLANNEL_OPTIONS="--iface=em1" # 修改成 本機物理網卡的名稱 # 新版本: FLANNEL_ETCD="http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379" # 修改成 集羣地址 FLANNEL_ETCD_PREFIX="/flannel/network" # 修改成 上面導入配置中的 /flannel/network FLANNEL_OPTIONS="--iface=em1" # 修改成 本機物理網卡的名稱 (3).啓動 flannel systemctl enable flanneld systemctl start flanneld systemctl status flanneld 安裝 docker # 導入 yum 源 # 安裝 yum-config-manager yum -y install yum-utils # 導入 yum-config-manager \ --add-repo \ https://download.docker.com/linux/centos/docker-ce.repo # 更新 repo yum makecache # 安裝 yum install docker-ce (1).更改docker 配置 # 修改配置 vi /usr/lib/systemd/system/docker.service [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network-online.target firewalld.service Wants=network-online.target [Service] Type=notify ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS $DOCKER_DNS_OPTIONS ExecReload=/bin/kill -s HUP $MAINPID LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity TimeoutStartSec=0 Delegate=yes KillMode=process Restart=on-failure StartLimitBurst=3 StartLimitInterval=60s [Install] WantedBy=multi-user.target # 修改其餘配置 cat >> /usr/lib/systemd/system/docker.service.d/docker-options.conf << EOF [Service] Environment="DOCKER_OPTS=--insecure-registry=10.254.0.0/16 --graph=/opt/docker --registry-mirror=http://b438f72b.m.daocloud.io" EOF # 從新讀取配置,啓動 docker systemctl daemon-reload systemctl start docker (3).查看docker網絡 ifconfig docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 10.233.19.1 netmask 255.255.255.128 broadcast 0.0.0.0 ether 02:42:c1:2c:c5:be txqueuelen 0 
(3). Check the docker network

ifconfig

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.233.19.1  netmask 255.255.255.128  broadcast 0.0.0.0
        ether 02:42:c1:2c:c5:be  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.54.12  netmask 255.255.255.0  broadcast 10.6.0.255
        inet6 fe80::d6ae:52ff:fed1:f0c9  prefixlen 64  scopeid 0x20<link>
        ether d4:ae:52:d1:f0:c9  txqueuelen 1000  (Ethernet)
        RX packets 16286600  bytes 1741928233 (1.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15841272  bytes 1566357399 (1.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.233.19.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::d9:e2ff:fe46:9cdd  prefixlen 64  scopeid 0x20<link>
        ether 02:d9:e2:46:9c:dd  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26 overruns 0  carrier 0  collisions 0

Install the kubectl tool

(1). On the Master

# Install kubectl first
wget https://dl.k8s.io/v1.7.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/* /usr/local/bin/
chmod a+x /usr/local/bin/kube*

# Verify the installation
kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

(2). Create the admin certificate

kubectl talks to kube-apiserver over the secure port, which requires a TLS certificate and key for the connection.

cd /opt/ssl/

vi admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Generate the admin certificate and private key
cd /opt/ssl/

/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json \
  -profile=kubernetes admin-csr.json | /opt/local/cfssl/cfssljson -bare admin

# Check the generated files
[root@k8s-master-1 ssl]# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

cp admin*.pem /etc/kubernetes/ssl/

(3). Configure the kubectl kubeconfig file

# Configure the kubernetes cluster
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.54.12:6443

# Configure client authentication
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

kubectl config use-context kubernetes

(4). Distribute the kubectl config file

# Distribute the kubeconfig file configured above to the other machines
# Create the directory on the other servers first
mkdir /root/.kube

scp /root/.kube/config 192.168.54.13:/root/.kube/
scp /root/.kube/config 192.168.54.14:/root/.kube/
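A quick optional sanity check of what was just written to /root/.kube/config and of the admin client certificate. Nothing below changes any state, and since kube-apiserver is not running yet, only offline checks make sense at this point:

# Show the cluster, user and context that kubectl will use
kubectl config view --minify
kubectl config current-context

# Confirm the client certificate carries CN=admin and O=system:masters, and check its validity period
openssl x509 -in /etc/kubernetes/ssl/admin.pem -noout -subject -dates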
"System" } ] } ## 這裏 hosts 字段中 三個 IP 分別爲 127.0.0.1 本機, 192.168.54.12 爲 Master 的IP, 10.254.0.1 爲 kubernetes SVC 的 IP, 通常是 部署網絡的第一個IP , 如: 10.254.0.1 , 在啓動完成後,咱們使用 kubectl get svc , 就能夠查看到 (3).生成 kubernetes 證書和私鑰 /opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \ -ca-key=/etc/kubernetes/ssl/ca-key.pem \ -config=/etc/kubernetes/ssl/config.json \ -profile=kubernetes kubernetes-csr.json | /opt/local/cfssl/cfssljson -bare kubernetes # 查看生成 [root@k8s-master-1 ssl]# ls -lt kubernetes* -rw-r--r-- 1 root root 1245 7月 4 11:25 kubernetes.csr -rw------- 1 root root 1679 7月 4 11:25 kubernetes-key.pem -rw-r--r-- 1 root root 1619 7月 4 11:25 kubernetes.pem -rw-r--r-- 1 root root 436 7月 4 11:23 kubernetes-csr.json # 拷貝到目錄 cp -r kubernetes* /etc/kubernetes/ssl/ (4).配置 kube-apiserver kubelet 首次啓動時向 kube-apiserver 發送 TLS Bootstrapping 請求,kube-apiserver 驗證 kubelet 請求中的 token 是否與它配置的 token 一致,若是一致則自動爲 kubelet生成證書和祕鑰。 # 生成 token [root@k8s-master-1 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' 11849e4f70904706ab3e631e70e6af0d # 建立 token.csv 文件 /opt/ssl vi token.csv 11849e4f70904706ab3e631e70e6af0d,kubelet-bootstrap,10001,"system:kubelet-bootstrap" # 拷貝 cp token.csv /etc/kubernetes/ (4).建立 kube-apiserver.service 文件 1、 開啓了 RBAC # 自定義 系統 service 文件通常存於 /etc/systemd/system/ 下 vi /etc/systemd/system/kube-apiserver.service [Unit] Description=kubernetes API Server Documentation=https://github.com/GoogleCloudPlatform/kubernetes After=network.target [Service] User=root ExecStart=/usr/local/bin/kube-apiserver \ --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \ --advertise-address=192.168.54.12 \ --allow-privileged=true \ --apiserver-count=3 \ --audit-log-maxage=30 \ --audit-log-maxbackup=3 \ --audit-log-maxsize=100 \ --audit-log-path=/var/lib/audit.log \ --authorization-mode=RBAC \ --bind-address=192.168.54.12 \ --client-ca-file=/etc/kubernetes/ssl/ca.pem \ --enable-swagger-ui=true \ --etcd-cafile=/etc/kubernetes/ssl/ca.pem \ --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \ --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \ --etcd-servers=http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379 \ --event-ttl=1h \ --kubelet-https=true \ --insecure-bind-address=192.168.54.12 \ --runtime-config=rbac.authorization.k8s.io/v1alpha1 \ --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \ --service-cluster-ip-range=10.254.0.0/16 \ --service-node-port-range=30000-60000 \ --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ --experimental-bootstrap-token-auth \ --token-auth-file=/etc/kubernetes/token.csv \ --v=2 Restart=on-failure RestartSec=5 Type=notify LimitNOFILE=65536 [Install] WantedBy=multi-user.target 2、 關閉了 RBAC # 自定義 系統 service 文件通常存於 /etc/systemd/system/ 下 vi /etc/systemd/system/kube-apiserver.service [Unit] Description=kubernetes API Server Documentation=https://github.com/GoogleCloudPlatform/kubernetes After=network.target [Service] User=root ExecStart=/usr/local/bin/kube-apiserver \ --storage-backend=etcd2 \ --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \ --advertise-address=192.168.54.12 \ --allow-privileged=true \ --apiserver-count=3 \ --audit-log-maxage=30 \ --audit-log-maxbackup=3 \ --audit-log-maxsize=100 \ --audit-log-path=/var/lib/audit.log \ --bind-address=192.168.54.12 \ --client-ca-file=/etc/kubernetes/ssl/ca.pem \ --enable-swagger-ui=true \ --etcd-cafile=/etc/kubernetes/ssl/ca.pem \ 
(4). Configure kube-apiserver

When kubelet starts for the first time it sends a TLS Bootstrapping request to kube-apiserver. kube-apiserver checks whether the token in the request matches the token it was configured with; if it matches, a certificate and key are issued for the kubelet automatically.

# Generate a token
[root@k8s-master-1 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
11849e4f70904706ab3e631e70e6af0d

# Create the token.csv file
cd /opt/ssl

vi token.csv

11849e4f70904706ab3e631e70e6af0d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Copy it into place
cp token.csv /etc/kubernetes/

(5). Create the kube-apiserver.service file

Variant 1: RBAC enabled

# Custom service files usually live under /etc/systemd/system/
vi /etc/systemd/system/kube-apiserver.service

[Unit]
Description=kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=192.168.54.12 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --authorization-mode=RBAC \
  --bind-address=192.168.54.12 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --enable-swagger-ui=true \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379 \
  --event-ttl=1h \
  --kubelet-https=true \
  --insecure-bind-address=192.168.54.12 \
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-60000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --experimental-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/token.csv \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Variant 2: RBAC disabled

# Custom service files usually live under /etc/systemd/system/
vi /etc/systemd/system/kube-apiserver.service

[Unit]
Description=kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver \
  --storage-backend=etcd2 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=192.168.54.12 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --bind-address=192.168.54.12 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --enable-swagger-ui=true \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379 \
  --event-ttl=1h \
  --kubelet-https=true \
  --insecure-bind-address=192.168.54.12 \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --experimental-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/token.csv \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Note --service-node-port-range=30000-32000
# This is the port range used when exposing services externally (NodePorts). Randomly allocated
# NodePorts come from this range, and an explicitly chosen NodePort must also fall inside it.

(6). Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

(7). Configure kube-controller-manager

# Create the kube-controller-manager.service file
vi /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://192.168.54.12:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-cidr=10.233.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

(8). Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

(9). Configure kube-scheduler

# Create the kube-scheduler.service file
vi /etc/systemd/system/kube-scheduler.service

[Unit]
Description=kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://192.168.54.12:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

(10). Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

(11). Verify the Master node

[root@k8s-master-1 opt]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
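Besides componentstatuses, the apiserver can be probed directly; a small sketch using the ports configured above (8080 is the insecure port bound to 192.168.54.12, 6443 the secure one):

# Insecure port — should return "ok"
curl http://192.168.54.12:8080/healthz

# Secure port, authenticated with the admin client certificate
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/admin.pem \
     --key /etc/kubernetes/ssl/admin-key.pem \
     https://192.168.54.12:6443/healthz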
Deploy the kubernetes Node nodes (192.168.54.13 is deployed first)

A Node needs the docker, flannel, kubectl, kubelet and kube-proxy components.

(1). Configure kubectl

wget https://dl.k8s.io/v1.7.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/* /usr/local/bin/
chmod a+x /usr/local/bin/kube*

# Verify the installation
kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

(2). Configure kubelet

When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then is the kubelet allowed to create certificate signing requests (certificatesigningrequests).

# Create the role binding first
# The user is the one configured in token.csv on the master
# This only needs to be done once, from a single node
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

(3). Download the binaries

cd /tmp

wget https://dl.k8s.io/v1.7.0/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/

(4). Create the kubelet kubeconfig file

# Configure the cluster
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.54.12:6443 \
  --kubeconfig=bootstrap.kubeconfig

# Configure client authentication
kubectl config set-credentials kubelet-bootstrap \
  --token=11849e4f70904706ab3e631e70e6af0d \
  --kubeconfig=bootstrap.kubeconfig

# Configure the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Select the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Move the generated bootstrap.kubeconfig file into place
mv bootstrap.kubeconfig /etc/kubernetes/

(5). Create the kubelet.service file

# Create the kubelet working directory
mkdir /var/lib/kubelet

vi /etc/systemd/system/kubelet.service

[Unit]
Description=kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --address=192.168.54.13 \
  --hostname-override=192.168.54.13 \
  --pod-infra-container-image=jicki/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --require-kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --cluster_dns=10.254.0.2 \
  --cluster_domain=cluster.local. \
  --hairpin-mode promiscuous-bridge \
  --allow-privileged=true \
  --serialize-image-pulls=false \
  --logtostderr=true \
  --v=2
ExecStopPost=/sbin/iptables -A INPUT -s 10.0.0.0/8 -p tcp --dport 4194 -j ACCEPT
ExecStopPost=/sbin/iptables -A INPUT -s 172.16.0.0/12 -p tcp --dport 4194 -j ACCEPT
ExecStopPost=/sbin/iptables -A INPUT -s 192.168.0.0/16 -p tcp --dport 4194 -j ACCEPT
ExecStopPost=/sbin/iptables -A INPUT -p tcp --dport 4194 -j DROP
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# Notes on the configuration above:
# 192.168.54.13 is the IP of this node
# 10.254.0.2 is the pre-allocated DNS service address
# cluster.local. is the kubernetes cluster domain
# jicki/pause-amd64:3.0 is the pod infrastructure image, i.e. gcr.io/google_containers/pause-amd64:3.0;
# pulling it once and pushing it to your own registry makes things faster.

(6). Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
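If the service does not come up cleanly, the full startup log is the first place to look before matching it against the two common failures described next; for example:

# Follow the kubelet log live, or dump the most recent entries
journalctl -u kubelet -f
journalctl -u kubelet --no-pager | tail -n 50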
1. If the service startup log shows the following (error 1):

18000 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
18000 kubelet.go:517] Hairpin mode set to "hairpin-veth"
18000 docker_service.go:207] Docker cri networking managed by kubernetes.io/no-op
k8s-new-cluster-node2 kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

Add the following parameter to the kubelet arguments in /etc/systemd/system/kubelet.service:

  --cgroup-driver=systemd \

2. If the service startup log shows the following (error 2):

ze:26214400 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 30 17:54:09 k8s-new-cluster-test-host01 kubelet: I1030 17:54:09.775906 22962 manager.go:222] Version: {KernelVersion:4.18.15-1.el7.elrepo.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:18.06.1-ce DockerAPIVersion:1.38 CadvisorVersion: CadvisorRevision:}
Oct 30 17:54:09 k8s-new-cluster-test-host01 kubelet: I1030 17:54:09.777164 22962 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Oct 30 17:54:09 k8s-new-cluster-test-host01 kubelet: error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename#011#011#011#011Type#011#011Size#011Used#011Priority /dev/sda4 partition#0118388604#0110#011-2]
Oct 30 17:54:09 k8s-new-cluster-test-host01 systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Oct 30 17:54:09 k8s-new-cluster-test-host01 systemd: Unit kubelet.service entered failed state.
Oct 30 17:54:09 k8s-new-cluster-test-host01 systemd: kubelet.service failed.

Disable swap, or add the --fail-swap-on parameter to /etc/systemd/system/kubelet.service:

[root@k8s-new-cluste]# swapoff -a

  --fail-swap-on=false \

(7). Configure TLS authentication

# Look up the name of the csr
[root@k8s-node1 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-EUE41uO5bofZZ-7GKD_V31oHXsENKFXCkLPy6Dj35Sc   1m        kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-EUE41uO5bofZZ-7GKD_V31oHXsENKFXCkLPy6Dj35Sc

# Output
certificatesigningrequest "node-csr-EUE41uO5bofZZ-7GKD_V31oHXsENKFXCkLPy6Dj35Sc" approved

(8). Verify the nodes

[root@k8s-master-1 ~]# kubectl get nodes
NAME            STATUS    AGE       VERSION
192.168.54.13   Ready     33s       v1.7.0

# On success the kubeconfig and key files are generated automatically

# Configuration file
ls /etc/kubernetes/kubelet.kubeconfig
/etc/kubernetes/kubelet.kubeconfig

# Key files
ls /etc/kubernetes/ssl/kubelet*
/etc/kubernetes/ssl/kubelet-client.crt  /etc/kubernetes/ssl/kubelet.crt
/etc/kubernetes/ssl/kubelet-client.key  /etc/kubernetes/ssl/kubelet.key

(9). Configure kube-proxy

(10). Create the kube-proxy certificate

# cfssl is not installed on the nodes,
# so go back to the master to generate the certificates, then copy them over
[root@k8s-master-1 ~]# cd /opt/ssl

vi kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

(11). Generate the kube-proxy certificate and private key

/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json \
  -profile=kubernetes kube-proxy-csr.json | /opt/local/cfssl/cfssljson -bare kube-proxy

# Check the generated files
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

# Copy them into the certificate directory
cp kube-proxy*.pem /etc/kubernetes/ssl/

(12). Copy them to the Node nodes

scp kube-proxy*.pem 192.168.54.13:/etc/kubernetes/ssl/
scp kube-proxy*.pem 192.168.54.14:/etc/kubernetes/ssl/

(13). Create the kube-proxy kubeconfig file

# Configure the cluster
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.54.12:6443 \
  --kubeconfig=kube-proxy.kubeconfig

# Configure client authentication
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Configure the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Select the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Move it into place
mv kube-proxy.kubeconfig /etc/kubernetes/
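The text does not say on which machine the kubeconfig above is generated. If it was generated on the master (where kubectl and the certificates already live), it also has to reach each node, in the same way the kube-proxy certificates did; a sketch assuming the same target path:

scp /etc/kubernetes/kube-proxy.kubeconfig 192.168.54.13:/etc/kubernetes/
scp /etc/kubernetes/kube-proxy.kubeconfig 192.168.54.14:/etc/kubernetes/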
(14). Create the kube-proxy.service file

# Create the kube-proxy working directory
mkdir -p /var/lib/kube-proxy

vi /etc/systemd/system/kube-proxy.service

[Unit]
Description=kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --bind-address=192.168.54.13 \
  --hostname-override=192.168.54.13 \
  --cluster-cidr=10.254.0.0/16 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(15). Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

(16). Deploy the other Node nodes (the second node is 192.168.54.14)

.........omitted.......
Follow the same steps as above.

(17). Test the cluster

# Create an nginx deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx

[root@k8s-master-1 ~]# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
nginx-dm-2214564181-lxff5   1/1       Running   0          14m
nginx-dm-2214564181-qm1bp   1/1       Running   0          14m

[root@k8s-master-1 ~]# kubectl get deployment
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-dm   2         2         2            2           14m

[root@k8s-master-1 ~]# kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.254.0.1      <none>        443/TCP   4h
nginx-svc    10.254.129.54   <none>        80/TCP    15m

# curl the service from a node
[root@k8s-node2 ~]# curl 10.254.129.54
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
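A couple of optional follow-up checks on the test service, confirming that the Service really has the two pods behind it and that scaling works (names match the deployment created above):

# The endpoints list should show two pod IPs on port 80
kubectl get endpoints nginx-svc

# Optional: scale the deployment and watch a third pod appear
kubectl scale deployment nginx-dm --replicas=3
kubectl get pods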
Configure KubeDNS

The official github yaml files are at https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns

(1). Download the images

# Official images (pulls fail from inside China because of the GFW)
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4

# Pull from one of these mirrors instead (pick one set)
docker pull hongchhe/k8s-dns-sidecar-amd64:1.14.4
docker pull hongchhe/k8s-dns-kube-dns-amd64:1.14.4
docker pull hongchhe/k8s-dns-dnsmasq-nanny-amd64:1.14.4

docker pull jicki/k8s-dns-sidecar-amd64:1.14.4
docker pull jicki/k8s-dns-kube-dns-amd64:1.14.4
docker pull jicki/k8s-dns-dnsmasq-nanny-amd64:1.14.4

(2). Download the yaml files

curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-cm.yaml
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-sa.yaml
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-controller.yaml.base
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-svc.yaml.base

# Rename the .base files
mv kubedns-controller.yaml.base kubedns-controller.yaml
mv kubedns-svc.yaml.base kubedns-svc.yaml

(3). The predefined RoleBinding

The predefined RoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns Role, which grants access to the DNS-related kube-apiserver APIs:

[root@k8s-master-1 kubedns]# kubectl get clusterrolebindings system:kube-dns -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-07-04T04:15:13Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-dns
  resourceVersion: "106"
  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings/system%3Akube-dns
  uid: 60c1e0e1-606f-11e7-b212-d4ae52d1f0c9
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system

(4). Modify kubedns-svc.yaml

# In kubedns-svc.yaml, change clusterIP: __PILLAR__DNS__SERVER__ to the DNS IP defined earlier, 10.254.0.2

cat kubedns-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

(5). Modify kubedns-controller.yaml

1. Change --domain=__PILLAR__DNS__DOMAIN__. to the domain chosen earlier:
   --domain=cluster.local.

2. Change the domain in --server=/__PILLAR__DNS__DOMAIN__/127.0.0.1#10053 to the one chosen earlier:
   --server=/cluster.local./127.0.0.1#10053

3. Change the domain in --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__, to the one chosen earlier:
   --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,

4. Change the domain in --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__, to the one chosen earlier:
   --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,
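The same substitutions (plus the clusterIP change in kubedns-svc.yaml) can be scripted instead of edited by hand; a sketch with sed — it substitutes the bare placeholder, so double-check that the trailing dots end up matching the manual edits listed above:

sed -i 's/__PILLAR__DNS__SERVER__/10.254.0.2/g' kubedns-svc.yaml
sed -i 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' kubedns-controller.yaml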
configmap "kube-dns" created deployment "kube-dns" created serviceaccount "kube-dns" created service "kube-dns" created (7).查看 kubedns 服務 [root@k8s-master-1 kubedns]# kubectl get all --namespace=kube-system NAME READY STATUS RESTARTS AGE po/kube-dns-1511229508-llfgs 3/3 Running 0 1m NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/kube-dns 10.254.0.2 <none> 53/UDP,53/TCP 1m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/kube-dns 1 1 1 1 1m NAME DESIRED CURRENT READY AGE rs/kube-dns-1511229508 1 1 1 1m (8).驗證 dns 服務 # 導入以前的 nginx-dm yaml文件 [root@k8s-master-1 ~]# kubectl get svc nginx-svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-svc 10.254.79.137 <none> 80/TCP 29s # 建立一個 pods 來測試一下 nameserver apiVersion: v1 kind: Pod metadata: name: alpine spec: containers: - name: alpine image: alpine command: - sh - -c - while true; do sleep 1; done # 查看 pods [root@k8s-master-1 ~]# kubectl get pods NAME READY STATUS RESTARTS AGE alpine 1/1 Running 0 1m nginx-dm-2214564181-4zjbh 1/1 Running 0 5m nginx-dm-2214564181-tpz8t 1/1 Running 0 5m # 測試 [root@k8s-master-1 ~]# kubectl exec -it alpine ping nginx-svc PING nginx-svc (10.254.207.143): 56 data bytes [root@k8s-master-1 ~]# kubectl exec -it alpine nslookup nginx-svc nslookup: can't resolve '(null)': Name does not resolve Name: nginx-svc Address 1: 10.254.207.143 nginx-svc.default.svc.cluster.local 部署 Ingress 與 Dashboard (1).部署 dashboard 官方 dashboard 的github https://github.com/kubernetes/dashboard 這裏注意,如下部署的應用爲 api-service 關閉了 RBAC 的, 在開啓了 RBAC 的狀況下,不管是 dashboard 與 nginx ingress 都須要修改,默認是有問題的。 (2).下載 dashboard 鏡像 # 官方鏡像 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 # 國內鏡像 jicki/kubernetes-dashboard-amd64:v1.6.1 #內網鏡像: reg.chehejia.com/k8s-web-ui:v1.6.1 (3).下載 yaml 文件 curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-controller.yaml curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-service.yaml (4).導入 yaml [root@k8s-master-1 dashboard]# kubectl apply -f . deployment "kubernetes-dashboard" created service "kubernetes-dashboard" created # 查看 svc 與 pod [root@k8s-master-1 dashboard]# kubectl get svc -n kube-system kubernetes-dashboard NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard 10.254.167.28 <none> 80/TCP 31s (5).部署 Nginx Ingress kubernetes 暴露服務的方式目前只有三種:LoadBlancer Service、NodePort Service、Ingress; 什麼是 Ingress ? 
(5). Deploy Nginx Ingress

kubernetes currently has only three ways to expose a service: LoadBalancer Service, NodePort Service, and Ingress.

What is an Ingress? An Ingress uses a load balancer such as Nginx or Haproxy to expose kubernetes services.

The official Nginx Ingress github is https://github.com/kubernetes/ingress/tree/master/examples/deployment/nginx

# Download the images

# Official images
gcr.io/google_containers/defaultbackend:1.0
gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.10

# Mirrors inside China
jicki/defaultbackend:1.0
jicki/nginx-ingress-controller:0.9.0-beta.10

# Internal registry images:
reg.chehejia.com/ingress-defaultbackend:v1
reg.chehejia.com/nginx-ingress-controller:v0.9-beta.10

# Deploy the Nginx backend; it is the default backend that requests for unknown hostnames are forwarded to.
curl -O https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/default-backend.yaml

# It can be imported as-is, no changes needed
[root@k8s-master-1 ingress]# kubectl apply -f default-backend.yaml
deployment "default-http-backend" created
service "default-http-backend" created

# Check the service
[root@k8s-master-1 ingress]# kubectl get deployment -n kube-system default-http-backend
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default-http-backend   1         1         1            1           36s

# Deploy the Ingress Controller component

# Download the yaml file
curl -O https://raw.githubusercontent.com/kubernetes/ingress/master/examples/daemonset/nginx/nginx-ingress-daemonset.yaml

# Import the yaml file
[root@k8s-master-1 ingress]# kubectl apply -f nginx-ingress-daemonset.yaml
daemonset "nginx-ingress-lb" created

# Check the service
[root@k8s-master-1 ingress]# kubectl get daemonset -n kube-system
NAME               DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
nginx-ingress-lb   2         2         2         2            2           <none>          11s

# Create an ingress

# Look at the existing svc
[root@k8s-master-1 Ingress]# kubectl get svc nginx-svc
NAME        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx-svc   10.254.207.143   <none>        80/TCP    1d

# Create the yaml file
vi nginx-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.jicki.me
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80

# Import the yaml
[root@k8s-master-1 ingress]# kubectl apply -f nginx-ingress.yaml
ingress "nginx-ingress" created

# Check the ingress
[root@k8s-master-1 Ingress]# kubectl get ingress
NAME            HOSTS            ADDRESS              PORTS     AGE
nginx-ingress   nginx.jicki.me   192.168.54.13,10...  80        24s

# Test access
[root@k8s-master-1 ingress]# curl -I nginx.jicki.me
HTTP/1.1 200 OK
Server: nginx/1.13.2
Date: Thu, 06 Jul 2017 04:21:43 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Last-Modified: Wed, 28 Jun 2017 18:27:36 GMT
ETag: "5953f518-264"
Accept-Ranges: bytes

# Configure a Dashboard Ingress

# Look at the dashboard svc
[root@k8s-master-1 ingress]# kubectl get svc -n kube-system kubernetes-dashboard
NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   10.254.81.94   <none>        80/TCP    2h

# Write the yaml file
vi dashboard-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kube-system
spec:
  rules:
  - host: dashboard.jicki.me
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

# Import the yaml
kubectl apply -f dashboard-ingress.yaml

# Check the ingress
[root@k8s-master-1 dashboard]# kubectl get ingress -n kube-system
NAME                HOSTS                ADDRESS              PORTS     AGE
dashboard-ingress   dashboard.jicki.me   192.168.54.13,10...  80        1m

# Test access
[root@k8s-master-1 dashboard]# curl -I dashboard.jicki.me
HTTP/1.1 200 OK
Server: nginx/1.13.2
Date: Thu, 06 Jul 2017 06:32:00 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 848
Connection: keep-alive
Accept-Ranges: bytes
Cache-Control: no-store
Last-Modified: Tue, 16 May 2017 12:53:01 GMT
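Both curl tests above assume that nginx.jicki.me and dashboard.jicki.me resolve to a node running the nginx-ingress-lb DaemonSet. If there is no DNS record for them in your environment, a hosts entry on the client machine is enough for testing; a sketch, with 192.168.54.13 assumed as the target node:

cat >> /etc/hosts << EOF
192.168.54.13 nginx.jicki.me dashboard.jicki.me
EOF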