Kubernetes: Concepts, Installation, and Configuration Notes

Tags (space-separated): kubernetes-series node


  • 1: Introduction to Kubernetes
  • 2: Installing and configuring Kubernetes
  • 3: The Kubernetes web UI

1: Introduction to Kubernetes

1.1 What Is Kubernetes

Kubernetes is a container cluster management system that Google open-sourced in 2014, commonly abbreviated as K8S.

K8S is used to deploy, scale, and manage containerized applications.

K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and a range of related features.

The goal of Kubernetes is to make deploying containerized applications simple and efficient.

Official site: http://www.kubernetes.io

1.2 Kubernetes Features

1. Self-healing
Restarts failed containers when a node fails, and replaces or redeploys them to keep the expected replica count; kills containers that fail their health checks and withholds client requests until a container is ready, so the online service is never interrupted.

2. Elastic scaling
Scales application instances up and down quickly via command, UI, or automatically based on CPU usage, keeping the application highly available under peak load and reclaiming resources in quiet periods to run the service at minimal cost (a kubectl sketch of scaling and rollout commands follows this list).

3. Automated rollouts and rollbacks
K8S updates applications with a rolling strategy, updating one Pod at a time instead of deleting all Pods at once; if a problem appears during the update, the change is rolled back, so the upgrade does not affect the business.

4. Service discovery and load balancing
K8S gives a group of containers a single access point (an internal IP address plus a DNS name) and load-balances across all associated containers, so users never have to deal with container IPs.

5. Secret and configuration management
Manages secrets and application configuration without baking sensitive data into images, improving the security of sensitive data. Commonly used configuration can also be stored in K8S for applications to consume.

6. Storage orchestration
Mounts external storage systems, whether local storage, public cloud (e.g., AWS), or network storage (e.g., NFS, GlusterFS, Ceph), as part of the cluster's resources, greatly increasing storage flexibility.

7. Batch processing
Provides one-off and scheduled tasks, covering batch data processing and analytics scenarios.
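Several of these features map directly onto kubectl one-liners. A minimal sketch, assuming a Deployment named nginx already exists (the image tag is only an illustration):

----
# elastic scaling: resize the Deployment to 5 replicas by hand
kubectl scale deployment nginx --replicas=5

# rolling update: move the Deployment to a new image, one batch of Pods at a time
kubectl set image deployment/nginx nginx=nginx:1.16

# rollback: undo the rollout if the new version misbehaves
kubectl rollout undo deployment/nginx
----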

1.3 Kubernetes Cluster Architecture and Components


Master components

kube-apiserver
The Kubernetes API server is the unified entry point to the cluster and the coordinator among all components. It exposes its interface as a RESTful API; every create, delete, update, query, and watch on resource objects is handled by the API server and then persisted to etcd.

kube-controller-manager
Handles the routine background tasks in the cluster. Each resource type has one controller, and the controller-manager is responsible for managing all of these controllers.

kube-scheduler
Selects a Node for each newly created Pod according to the scheduling algorithm. It can be deployed anywhere: on the same node as the other master components or on a separate one.

etcd
A distributed key-value store that holds the cluster's state data, such as Pod and Service object information.

Node components

kubelet
The kubelet is the Master's agent on each Node. It manages the lifecycle of containers on the local machine: creating containers, mounting volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.

kube-proxy
Implements the Pod network proxy on each Node and maintains the network rules and layer-4 load balancing.

docker or rocket
The container engine that actually runs the containers.


1. Pod

• the smallest deployable unit
• a group of one or more containers
• containers in a Pod share a network namespace
• Pods are ephemeral

2. Controllers : higher-level objects that deploy and manage Pods

• ReplicaSet : ensures the expected number of Pod replicas
• Deployment : stateless application deployment
• StatefulSet : stateful application deployment
• DaemonSet : ensures every Node runs a copy of the same Pod
• Job : one-off task
• CronJob : scheduled task

3. Service

• keeps Pods reachable as they come and go
• defines an access policy for a set of Pods

4. Label : a label attached to a resource, used to associate, query, and filter objects

5. Namespaces : namespaces, which isolate objects logically

6. Annotations : annotations

A minimal manifest tying several of these concepts together is sketched below.
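The following sketch is illustrative only, not part of the walkthrough below; the names and image are assumptions. A Deployment keeps three nginx Pod replicas, the Label app: nginx ties them together, and a Service defines one access policy for the set:

----
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx        # the Label the Service selects on
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  selector:
    app: nginx            # access policy: route to Pods carrying this Label
  ports:
  - port: 80
    targetPort: 80
EOF
----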

2: Deploying a Highly Available Kubernetes Cluster Environment

2.1 The Three Official Deployment Methods

1. minikube
Minikube is a tool that quickly runs a single-node Kubernetes locally; it is meant only for users trying out Kubernetes or doing day-to-day development. Deployment guide: https://kubernetes.io/docs/setup/minikube/

2. kubeadm
Kubeadm is also a tool; it provides kubeadm init and kubeadm join for standing up a Kubernetes cluster quickly. Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

3. Binary packages (recommended)
Download the release binaries from the official site and deploy each component by hand to assemble the Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases

2.2 Deploying from Binary Packages

2.2.1 Software and Versions


2.2.2 IP Addresses and Role Planning

192.168.100.11  master01 : kube-apiserver, kube-controller-manager, kube-scheduler, etcd01
192.168.100.13  node01   : kubelet, kube-proxy, docker, flannel, etcd02
192.168.100.14  node02   : kubelet, kube-proxy, docker, flannel, etcd03

2.2.3 Single-Master Topology

2.2.4 Multi-Master Topology

2.3 Deploying the Single-Master Setup First

2.3.1 Self-Signed SSL Certificates

mkdir /k8s/{k8s-cert,etcd-cert} -p

cd /root/Deploy

cp -p etcd-cert.sh /k8s/etcd-cert
cd /k8s/etcd-cert

chmod +x etcd-cert.sh
----
Contents of etcd-cert.sh:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.100.11",
    "192.168.100.12",
    "192.168.100.13",
    "192.168.100.14",
    "192.168.100.15",
    "192.168.100.16",
    "192.168.100.17",
    "192.168.100.18",
    "192.168.100.60"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

----

Installing the cfssl tools:

cd /root/Deploy

cp -p cfssl.sh /k8s/etcd-cert

cd /k8s/etcd-cert

chmod +x cfssl.sh
./cfssl.sh
./etcd-cert.sh
----
Contents of cfssl.sh:

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
----


2.3.2 Configuring the etcd Service

Binary package download address:
https://github.com/etcd-io/etcd/releases

cd /root/Soft

tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz

cd etcd-v3.3.10-linux-amd64/

mkdir -p /opt/etcd/{ssl,bin,cfg} 

mv etcd etcdctl /opt/etcd/bin/

cd /k8s/etcd-cert

cp -p *.pem /opt/etcd/ssl
cp -p *.csr /opt/etcd/ssl


cd /root/Deploy

cp -p etcd.sh /root 

chmod +x etcd.sh 

./etcd.sh etcd01 192.168.100.11 etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380

scp -r /opt/etcd 192.168.100.13:/opt/

scp -r /opt/etcd 192.168.100.14:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.100.13:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/etcd.service root@192.168.100.14:/usr/lib/systemd/system/

----
Contents of etcd.sh:

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.100.11 etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
----


Log in to 192.168.100.11; the etcd config file:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
Log in to 192.168.100.13; the etcd config file:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.13:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
----
Log in to 192.168.100.14; the etcd config file:
vim /opt/etcd/cfg/etcd
----

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.14:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.14:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.14:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
----
Start the etcd service:
service etcd start 
chkconfig etcd on


Verify the etcd cluster:

cd /opt/etcd/ssl

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" \
cluster-health
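etcdctl in etcd 3.x defaults to the v2 API, which is why cluster-health works above. The equivalent check through the v3 API is sketched below (same certificate paths, only the flag names differ):

----
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=ca.pem --cert=server.pem --key=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" \
endpoint health
----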


2.4 Installing Docker on the Nodes


Install docker on the node machines:
192.168.100.13 and 192.168.100.14

----
yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Docker registry mirror (accelerator):
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://abcd1234.m.daocloud.io

# yum install docker-ce
# systemctl start docker
# systemctl enable docker

---
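The set_mirror.sh helper configures a registry mirror for the docker daemon. The equivalent manual configuration might look like this (a sketch; the mirror URL is a placeholder, substitute your own accelerator address):

----
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["http://abcd1234.m.daocloud.io"]
}
EOF
systemctl restart docker
----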


2.5 Deploying the Flannel Network Model

2.5.1 The Kubernetes Network Model (CNI)

Container Network Interface (CNI): the container network interface standard, driven by Google and CoreOS.

Design requirements of the Kubernetes network model:
 1. one IP per Pod
 2. each Pod gets its own IP, and all containers in the Pod share that network (the same IP)
 3. all containers can communicate with all other containers
 4. all nodes can communicate with all containers


2.5.2 The Flannel Network Model

Overlay Network: a virtual network layered on top of the underlying network, in which hosts are connected by virtual links.
VXLAN: encapsulates the original packet in UDP, wraps it with the underlay's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the payload to the target address.
Flannel: one kind of overlay network. It likewise encapsulates the source packet inside another network packet for routing, forwarding, and communication, and currently supports UDP, VXLAN, AWS VPC, GCE routes, and other forwarding backends.


2.5.3 Installing and Configuring Flannel

1. Write the allocated subnet range into etcd for flanneld to use:

cd /opt/etcd/ssl/

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'


2. Download flannel:
https://github.com/coreos/flannel/releases

tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz

mv flanneld /opt/kubernetes/bin/

mv mk-docker-opts.sh /opt/kubernetes/bin/


Deploy flannel on the node machines:

mkdir /opt/kubernetes/{bin,cfg,ssl} -p 

cd /root/

./flannel.sh https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379
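The contents of flannel.sh are not shown above; a plausible minimal version, following the same pattern as etcd.sh (the paths and the use of --ip-masq are assumptions), writes a flanneld options file plus a systemd unit:

----
#!/bin/bash
# usage: ./flannel.sh <etcd-endpoints>
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

# flanneld reads the subnet configuration written into etcd earlier
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
# write the subnet env file that docker consumes as DOCKER_NETWORK_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
----

For the flannel subnet to reach the containers, docker.service must also load /run/flannel/subnet.env and pass $DOCKER_NETWORK_OPTIONS to dockerd, which is why docker is restarted right after flanneld below.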


Start flannel and docker:

service flanneld restart

service docker restart


Testing the flannel network

Create a throwaway container on each of 192.168.100.13 and 192.168.100.14 and check that the two can ping each other.

Launch a test container on each node:

docker run -it busybox

Inside each container, find its address with ip addr, then ping the other container's address.



Inspect the flannel routes recorded in etcd:
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" ls /coreos.com/network/

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" ls /coreos.com/network/subnets


2.6 Deploying the Kubernetes Master Components

2.6.1 Downloading Kubernetes

Download address (choose version 1.13.4):
https://dl.k8s.io/v1.13.4/kubernetes-server-linux-amd64.tar.gz

2.6.2 Deploying kube-apiserver

tar -zxvf kubernetes-server-linux-amd64.tar.gz

mkdir -p /opt/kubernetes/{bin,cfg,ssl}
cd /root/kubernetes/server/bin

cp -p kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/

cd /root/Deploy 
chmod +x apiserver.sh 

 ./apiserver.sh 192.168.100.11 https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379
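apiserver.sh itself is not reproduced here; the configuration it generates plausibly looks like the following (a sketch built from standard kube-apiserver 1.13 flags; the service CIDR, NodePort range, and certificate paths are assumptions consistent with the rest of this walkthrough):

----
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379 \
--bind-address=192.168.100.11 \
--secure-port=6443 \
--advertise-address=192.168.100.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
----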


Configure the Kubernetes certificates:

cd /root/Deploy 

cp -p k8s-cert.sh /opt/kubernetes/ssl

cd /opt/kubernetes/ssl

chmod +x k8s-cert.sh 

./k8s-cert.sh


Generate the token file:

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

cat token.csv

mv token.csv /opt/kubernetes/cfg/


Restart kube-apiserver:

service kube-apiserver restart 

netstat -nultp |grep 8080
netstat -nultp |grep 6443 

Command for troubleshooting the apiserver (runs it in the foreground):

/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS


2.6.3 Deploying controller-manager and scheduler

cd /root/Deploy
chmod +x controller-manager.sh 
chmod +x scheduler.sh 

./controller-manager.sh 127.0.0.1
./scheduler.sh 127.0.0.1


Check the cluster status:

cd /root/kubernetes/server/bin/
cp -p kubectl /usr/bin/
kubectl get cs


Grant the authorization required by the Node components:


kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap


cd /root/Deploy
cp -p kubeconfig.sh /root 
cd /root
chmod +x kubeconfig.sh 
./kubeconfig.sh 192.168.100.11 /opt/kubernetes/ssl

This generates the bootstrap.kubeconfig and kube-proxy.kubeconfig files. A sketch of the script's core follows.
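The core of kubeconfig.sh is typically a series of kubectl config calls. A sketch, assuming BOOTSTRAP_TOKEN holds the same token that was written into token.csv, and that k8s-cert.sh produced kube-proxy.pem/kube-proxy-key.pem:

----
APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://$APISERVER:6443"

# bootstrap.kubeconfig: used by the kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig: authenticates with the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
----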



Check the certificate signing requests:
kubectl get csr


kubectl certificate approve node-csr-H38P-yvXaCa5GO7nXNg_2zegNT1BuSr-wCBzBXOPXBc

kubectl certificate approve node-csr-kAJTeC6Biz8ZtNsbFCSoL8AF-DhAFBlocn8xDxzTr1s

kubectl get csr 

kubectl get node


2.7 Deploying the Kubernetes Node Components

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files onto the nodes:

scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.100.13:/opt/kubernetes/cfg/

scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.100.14:/opt/kubernetes/cfg/

cp -p bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/

cd /root/kubernetes/server/bin

scp kubelet root@192.168.100.13:/opt/kubernetes/bin/

scp kubelet root@192.168.100.14:/opt/kubernetes/bin/


Run on the node machines (each command on its respective node):
cd /root
chmod +x kubelet.sh
./kubelet.sh 192.168.100.13
./kubelet.sh 192.168.100.14
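kubelet.sh is likewise not listed above; the options file it writes is plausibly along these lines (a sketch; the pause-image source is an assumption, and --hostname-override must match the node being configured):

----
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.13 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
----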


Deploy kube-proxy:

scp kube-proxy 192.168.100.13:/opt/kubernetes/bin/
scp kube-proxy 192.168.100.14:/opt/kubernetes/bin/


kube-proxy deployment settings

Log in to each node:

chmod +x proxy.sh 

./proxy.sh 192.168.100.13

./proxy.sh 192.168.100.14

ps -ef |grep proxy
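proxy.sh follows the same pattern; a sketch of the kube-proxy options it plausibly writes (the cluster CIDR is an assumption matching the apiserver's service range):

----
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.13 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
----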


Run an nginx instance as a test:

# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx
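kubectl get svc nginx prints the NodePort mapped onto port 88; the service is then reachable on that port of any node. One way to test it without reading the port by eye (a sketch):

----
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.100.13:$NODE_PORT
----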


Problem: the master node cannot view Pod logs:

error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log nginx-7cdbd8cdc9-z7jpk))

----
Fix:
  open up anonymous access on the kubelet

vim /opt/kubernetes/cfg/kubelet.config
Append at the end:
authentication:
  anonymous:
    enabled: true
----

service kubelet restart 

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous



2.8 Deploying a Kubernetes UI (Dashboard)

Download link:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

Locate the Kubernetes source tarball and unpack it:

cd /root/kubernetes/
tar -zxvf kubernetes-src.tar.gz

cd /root/kubernetes/cluster/addons/dashboard/


kubectl create -f dashboard-configmap.yaml

kubectl create -f dashboard-rbac.yaml

kubectl create -f dashboard-secret.yaml


vim dashboard-controller.yaml
Change the image: line to:

image: registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.10.1


kubectl create -f dashboard-controller.yaml


kubectl get pods -n kube-system


kubectl get pods --all-namespaces


Modify dashboard-service.yaml:

vim dashboard-service.yaml

Add:
  type: NodePort

kubectl create -f dashboard-service.yaml
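After the edit, the Service section plausibly reads as follows (only type: NodePort is the addition; the rest follows the stock addon manifest):

----
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
----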


kubectl get svc -n kube-system


kubectl get pods -o wide --all-namespaces


Open a browser and visit:

https://192.168.100.13:34392

Use the Firefox browser.


Log in with the k8s-admin token.

Contents of k8s-admin.yaml:

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

kubectl create -f k8s-admin.yaml


kubectl get secret -n kube-system

kubectl describe secret dashboard-admin-token-g64n4 -n kube-system

The token is at the very bottom of the output:

 eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZzY0bjQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYTA4ZDA0OTQtZDQ2ZC0xMWU5LTkxMGYtMDAwYzI5ZjUyMjMxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.tqFByxlIY3eLjHWzA7nY5Sm3-cHz_vbNSSTCnbe91XKmwJDYSmN-b3XtkR2bWk0PC4UUyPr3HVXqW_tblgbBAOgmm22DI4yXmf0Rn82QBAYEHu-brCxb1u-9NRle09gjlsZtCiTggS5D7Pa-QNXZGYxDEwSPSi19kmvaNJIYVfmJCmTiyW3ObiSKYOLj_f21XOucdfr4lrIt0EA-TksfM3B0DfiEsu_nIGOWCEivh15XLm2hE-en45Y0cNH8XCTlMaOT-WmGUi9E1hZ9da9pKc0wKAuIUgtI25SrzhILabVxw9u-iar2YqFxUrsGf4u55TlJ74x9YKeCYFnqCVhsTg
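The -g64n4 suffix on the secret name is random, so a generic way to pull the token without looking the name up first (a one-liner sketch):

----
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin-token/{print $1}') \
  | awk '/^token:/{print $2}'
----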

