Kubernetes (k8s) Cluster Installation

1: Introduction

2: Base Environment Setup

1. System Environment

OS        Role      IP             Memory
CentOS 7  master01  192.168.25.30  4G
CentOS 7  node01    192.168.25.31  4G
CentOS 7  node02    192.168.25.32  4G

2. Disable SELinux

sed -i "s/SELINUX\=.*/SELINUX=disabled/g" /etc/selinux/config
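# the sed edit only takes effect after a reboot; also disable SELinux for the current session
setenforce 0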

3. Disable the Firewall

systemctl disable firewalld && systemctl stop firewalld

4. Set the Hostname

hostnamectl set-hostname <role_name>   # e.g. master01, node01, or node02 on the corresponding host

5. Add hosts Entries

echo -e "192.168.25.30 master01\n192.168.25.31 node01\n192.168.25.32 node02" >> /etc/hosts

6. Set Kernel Parameters for k8s

Set the kernel parameters:

cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

Load the kernel module:

modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/rc.local

Apply the kernel parameters:

sysctl -p /etc/sysctl.d/k8s.conf

7. Disable System Swap

swapoff -a

Edit /etc/fstab to disable automatic mounting of swap.
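
For example, the swap entry can be commented out in place (a one-liner sketch; review /etc/fstab afterwards in case other lines also mention swap):

sed -i '/swap/ s/^/#/' /etc/fstab   # comment out the swap mount entry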

8. Adjust the iptables FORWARD Policy

/sbin/iptables -P FORWARD ACCEPT
echo  "sleep 60 && /sbin/iptables -P FORWARD ACCEPT" >> /etc/rc.local
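# on CentOS 7, /etc/rc.local is only executed at boot if it is marked executable
chmod +x /etc/rc.d/rc.local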

9. Install Dependency Packages

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget

10. Time Synchronization

yum -y install ntpdate
/usr/sbin/ntpdate -u ntp1.aliyun.com
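
To keep the clocks in sync afterwards, a periodic job can be added (a sketch; any reachable NTP server works):

echo "*/30 * * * * /usr/sbin/ntpdate -u ntp1.aliyun.com > /dev/null 2>&1" >> /var/spool/cron/root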

11. Install docker-ce

Note: the master node does not need Docker installed.

  1. Remove the bundled Docker packages:

    yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-selinux \
                      docker-engine-selinux \
                      docker-engine
  2. Install dependency packages:

    yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
  3. Add the Docker yum repository:

    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  4. Install docker-ce:

    yum -y install docker-ce
  5. Start Docker and enable it at boot (start Docker only after flanneld has been installed and configured):

    systemctl start docker && systemctl enable docker

12. Install CFSSL


export CFSSL_URL="https://pkg.cfssl.org/R1.2"
wget "${CFSSL_URL}/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "${CFSSL_URL}/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
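
A quick sanity check that the binaries are installed and executable:

cfssl version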

3: Create the CA Certificate and Keys

The Kubernetes components use TLS certificates to encrypt their communication. This document uses CloudFlare's cfssl toolset to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all other TLS certificates created later.

All of the following operations are performed on the master node. The certificates only need to be created once; when adding a node later, simply copy the certificates under /etc/kubernetes/ to the new node.

1. Create the CA Configuration File

mkdir /root/ssl
cd /root/ssl

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
  • ca-config.json: multiple profiles can be defined, each with its own expiry time, usage scenario, and other parameters; a specific profile is referenced later when signing certificates;
  • signing: the certificate can be used to sign other certificates; CA=TRUE is set in the generated ca.pem;
  • server auth: a client can use this CA to verify certificates presented by servers;
  • client auth: a server can use this CA to verify certificates presented by clients.

2. Create the CA Certificate Signing Request

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • "CN":Common Name,kube-apiserver從證書中提取該字段做爲請求的用戶名(User name);瀏覽器檢驗該字段驗證網站是否合法;
  • 「O」:Organization,kube-apiserver從證書提取該字段做爲請求用戶所屬的組(Group);

3. Generate the CA Certificate and Private Key

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/03/29 14:38:31 [INFO] generating a new CA key and certificate from CSR
2018/03/29 14:38:31 [INFO] generate received request
2018/03/29 14:38:31 [INFO] received CSR
2018/03/29 14:38:31 [INFO] generating key: rsa-2048
2018/03/29 14:38:31 [INFO] encoded CSR
2018/03/29 14:38:31 [INFO] signed certificate with serial number 438768005817886692243142700194592359153651905696

4. Create the kubernetes Certificate Signing Request File

cat > kubernetes-csr.json << EOF
{
   "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.25.30",
      "192.168.25.31",
      "192.168.25.32",
      "10.254.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
  • The hosts field may be left empty; in that case, new nodes can later be added to the cluster without regenerating the certificate. If hosts is not empty, it must list every IP or domain name authorized to use the certificate. Because this certificate is later used by both the etcd cluster and the Kubernetes master, the etcd node IPs, the master IP, and the Kubernetes service IP are all listed above.

5. Generate the kubernetes Certificate and Private Key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2018/03/29 14:46:12 [INFO] generate received request
2018/03/29 14:46:12 [INFO] received CSR
2018/03/29 14:46:12 [INFO] generating key: rsa-2048
2018/03/29 14:46:12 [INFO] encoded CSR
2018/03/29 14:46:12 [INFO] signed certificate with serial number 6955479006214073693226115919937339031303355422
2018/03/29 14:46:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
 
# ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

6. Create the admin Certificate Signing Request File

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
  • kube-apiserver uses RBAC to authorize requests from clients (kubelet, kube-proxy, Pods, and so on);
  • kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to all kube-apiserver APIs;
  • O sets this certificate's Group to system:masters. When kubectl accesses kube-apiserver with this certificate, authentication succeeds because the certificate is signed by the CA, and since the certificate's group is the pre-authorized system:masters, it is granted access to all APIs.

7. Generate the admin Certificate and Private Key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2018/03/29 14:57:01 [INFO] generate received request
2018/03/29 14:57:01 [INFO] received CSR
2018/03/29 14:57:01 [INFO] generating key: rsa-2048
2018/03/29 14:57:02 [INFO] encoded CSR
2018/03/29 14:57:02 [INFO] signed certificate with serial number 356467939883849041935828635530693821955945645537
2018/03/29 14:57:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements")

# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

8. Create the kube-proxy Certificate Signing Request File

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • CN sets the certificate's User to system:kube-proxy;
  • the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants the permissions for the kube-proxy-related kube-apiserver APIs.

9. Generate the kube-proxy Client Certificate and Private Key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
2018/03/29 15:09:36 [INFO] generate received request
2018/03/29 15:09:36 [INFO] received CSR
2018/03/29 15:09:36 [INFO] generating key: rsa-2048
2018/03/29 15:09:36 [INFO] encoded CSR
2018/03/29 15:09:36 [INFO] signed certificate with serial number 225974417080991591210780916866547658424323006961
2018/03/29 15:09:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

10. Distribute the Certificates

Copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl on every machine:

mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl

ssh node01 "mkdir -p /etc/kubernetes/ssl"
scp *.pem node01:/etc/kubernetes/ssl

ssh node02 "mkdir -p /etc/kubernetes/ssl"
scp *.pem node02:/etc/kubernetes/ssl

4: Deploy the etcd Cluster

etcd must be installed on all three nodes; run the steps below once on each machine.

1. Download etcd and Install the Binaries

wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar -xvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/etcd* /usr/local/bin

# the following two binaries are now available
etcd     etcdctl

2. Create the Working Directory

mkdir -p /var/lib/etcd

3. Create the systemd Unit Files

master01

cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name master01 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.30:2380 \\
  --listen-peer-urls https://192.168.25.30:2380 \\
  --listen-client-urls https://192.168.25.30:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.30:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

node01

cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node01 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.31:2380 \\
  --listen-peer-urls https://192.168.25.31:2380 \\
  --listen-client-urls https://192.168.25.31:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.31:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

node02

cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node02 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.32:2380 \\
  --listen-peer-urls https://192.168.25.32:2380 \\
  --listen-client-urls https://192.168.25.32:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.32:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • etcd's working directory and data directory are both /var/lib/etcd; create the directory before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
  • to secure communication, specify etcd's certificate and key (cert-file and key-file), the peer-communication certificate, key, and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the client CA certificate (trusted-ca-file);
  • the hosts field of the kubernetes-csr.json used to create kubernetes.pem must contain all etcd node IPs, otherwise certificate validation fails;
  • when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.

4. Start the etcd Service

cp etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

5. Verify the etcd Cluster

# etcdctl \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    cluster-health
member 2ea4d6efe7f32da is healthy: got healthy result from https://192.168.25.32:2379
member 5246473f59267039 is healthy: got healthy result from https://192.168.25.31:2379
member be723b813b44392b is healthy: got healthy result from https://192.168.25.30:2379
cluster is healthy

5: Deploy Flannel

Flannel needs to be deployed and installed on every node.

1. Download and Install Flannel

wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
mkdir flannel
tar -xzvf flannel-v0.9.1-linux-amd64.tar.gz -C flannel
cp flannel/{flanneld,mk-docker-opts.sh} /usr/local/bin

2. Write the Network Configuration into etcd (run on one machine only)

etcdctl --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kubernetes/network

etcdctl --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

3. Create the systemd Unit File

cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  -etcd-endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \\
  -etcd-prefix=/kubernetes/network
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
  • mk-docker-opts.sh writes the Pod subnet assigned to flanneld into /run/flannel/docker; when Docker starts later, it configures the docker0 bridge from the parameters in this file;
  • flanneld communicates with other nodes over the interface of the system default route; on machines with multiple interfaces (e.g. internal and public networks), use the -iface=enpxx option to specify the communication interface.

4. Start the flanneld Service

mv flanneld.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

5. Check the flanneld Status

# /usr/local/bin/etcdctl \
 --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
 --ca-file=/etc/kubernetes/ssl/ca.pem \
 --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
 --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
 ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.30.82.0-24
/kubernetes/network/subnets/172.30.1.0-24
/kubernetes/network/subnets/172.30.73.0-24

6. Configure Docker to Use the Flannel Network

Edit /usr/lib/systemd/system/docker.service:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# modified
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
# added
EnvironmentFile=/run/flannel/docker

ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
  • when flanneld starts, it writes the network configuration into the DOCKER_NETWORK_OPTIONS variable in /run/flannel/docker; passing this variable on the dockerd command line sets the docker0 bridge parameters;
  • if multiple EnvironmentFile options are specified, /run/flannel/docker must come last (to ensure docker0 uses the bip parameter generated by flanneld);
  • do not disable the --iptables and --ip-masq options, which are enabled by default;
  • on a reasonably new kernel, the overlay storage driver is recommended;
  • --exec-opt native.cgroupdriver=systemd can be set to "cgroupfs" or "systemd".

7. Start Docker

systemctl daemon-reload && systemctl start docker && systemctl enable docker
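
To confirm that Docker picked up the flannel subnet, inspect the generated options file and the bridge address (the subnet differs per node):

cat /run/flannel/docker    # DOCKER_NETWORK_OPTIONS should contain a --bip=172.30.x.1/24 entry
ip addr show docker0       # docker0 should sit inside this node's flannel subnet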

6: Deploy the kubectl Tool

kubectl is the Kubernetes cluster management tool; any node with kubectl can manage the entire k8s cluster. This document deploys it on the master01 node. After the steps below, the /root/.kube/config file is generated; kubectl reads the kube-apiserver address, certificates, user name, and other information from this file.

1. Download the Client Package

wget https://dl.k8s.io/v1.8.6/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
sudo cp kubernetes/client/bin/kube* /usr/local/bin/
chmod a+x /usr/local/bin/kube*
export PATH=/usr/local/bin:$PATH   # normally already present in PATH

2. Create the /root/.kube/config File

# set the cluster parameters; --server is the master node IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443

# set the client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

# set the context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

# set the default context
kubectl config use-context kubernetes
  • admin.pem: the certificate's O field is system:masters; kube-apiserver's predefined RoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants the permissions for all kube-apiserver APIs.
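
The generated config can be inspected without the apiserver running (embedded certificate data is redacted in the output):

kubectl config view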

3. Create the bootstrap.kubeconfig File

# generate the token variable
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

mv token.csv /etc/kubernetes/

# set the cluster parameters; --server is the master node IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443 \
  --kubeconfig=bootstrap.kubeconfig

# set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

mv bootstrap.kubeconfig /etc/kubernetes/

4. Create kube-proxy.kubeconfig

# set the cluster parameters; --server is the master IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443 \
  --kubeconfig=kube-proxy.kubeconfig

# set the client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
mv kube-proxy.kubeconfig /etc/kubernetes/
  • --embed-certs=true is set for both the cluster and the credential parameters, which embeds the contents of the files referenced by certificate-authority, client-certificate, and client-key into the generated kube-proxy.kubeconfig file;
  • the CN of kube-proxy.pem is system:kube-proxy; the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants the permissions for the kube-apiserver proxy-related APIs.

5. Copy the Generated Config Files to the Other Nodes

scp /etc/kubernetes/kube-proxy.kubeconfig node01:/etc/kubernetes/
scp /etc/kubernetes/kube-proxy.kubeconfig node02:/etc/kubernetes/
  
scp /etc/kubernetes/bootstrap.kubeconfig node01:/etc/kubernetes/
scp /etc/kubernetes/bootstrap.kubeconfig node02:/etc/kubernetes/

7: Deploy the Master Node

1. Download the Server Package

wget https://dl.k8s.io/v1.8.6/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

2. Deploy the kube-apiserver Service

Create the kube-apiserver systemd unit file:

cat > kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --logtostderr=true \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \\
  --advertise-address=192.168.25.30 \\
  --bind-address=192.168.25.30 \\
  --insecure-bind-address=127.0.0.1 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
  --kubelet-https=true \\
  --enable-bootstrap-token-auth \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --service-node-port-range=8400-10000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/lib/audit.log \\
  --event-ttl=1h \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --authorization-mode=Node,RBAC enables RBAC on the secure port; unauthorized requests are rejected;
  • kube-scheduler and kube-controller-manager are normally deployed on the same machine as kube-apiserver and communicate with it over the insecure port;
  • kubelet, kube-proxy, and kubectl run on other nodes; when they access kube-apiserver over the secure port, they must first pass TLS certificate authentication and then RBAC authorization;
  • kube-proxy and kubectl obtain RBAC authorization through the User and Group embedded in their certificates;
  • Bootstrap: if the kubelet TLS bootstrapping mechanism is used, do not also set --kubelet-certificate-authority, --kubelet-client-certificate, or --kubelet-client-key, otherwise kube-apiserver later fails to validate kubelet certificates with an "x509: certificate signed by unknown authority" error;
  • --admission-control must include ServiceAccount, otherwise deploying cluster add-ons fails;
  • --bind-address must not be 127.0.0.1;
  • --runtime-config: enables rbac.authorization.k8s.io/v1alpha1, the apiVersion used at runtime;
  • --service-cluster-ip-range: the Service cluster IP range; these addresses are not routable;
  • --service-node-port-range: the port range for NodePort Services.

By default, Kubernetes objects are stored under the /registry prefix in etcd; this prefix can be changed with the --etcd-prefix flag.
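
In this Kubernetes version the apiserver stores objects through the etcd v3 API, so listing the stored keys requires etcdctl in v3 mode (a sketch to run after kube-apiserver is up; output varies):

ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://192.168.25.30:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/kubernetes.pem \
  --key=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /registry --prefix --keys-only | head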

Start the service and enable it at boot:

cp kube-apiserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

3. Deploy the kube-controller-manager Service

Create the systemd unit file:

cat > kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --logtostderr=true  \\
  --address=127.0.0.1 \\
  --master=http://127.0.0.1:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --cluster-cidr=172.30.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine;
  • --master=http://{master_ip}:8080: communicate with kube-apiserver over the insecure 8080 port;
  • --cluster-cidr is the CIDR range for Pods in the cluster; this network must be routable between all nodes (flanneld guarantees this);
  • --service-cluster-ip-range is the CIDR range for Services in the cluster; this network must not be routable between nodes and must match the same flag on kube-apiserver;
  • the certificate and key given by --cluster-signing-* are used to sign the certificates and keys created through TLS bootstrapping;
  • --root-ca-file is used to verify the kube-apiserver certificate; when set, this CA certificate is placed into the ServiceAccount of Pod containers;
  • --leader-elect=true: when multiple masters are deployed, elect a single kube-controller-manager process to be active.

Start the service:

cp kube-controller-manager.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

4. Deploy the kube-scheduler Service

Create the kube-scheduler unit file:

cat > kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --logtostderr=true \\
  --address=127.0.0.1 \\
  --master=http://127.0.0.1:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same host;
  • --master=http://{MASTER_IP}:8080: communicate with kube-apiserver over the insecure 8080 port;
  • --leader-elect=true: when multiple masters are deployed, elect a single kube-scheduler process to be active.

Start kube-scheduler:

cp kube-scheduler.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

5. Verify the Master Node

# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}

8: Deploy the Nodes

1. Deploy the kubelet Service

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests.

Grant the role (run once, on the master):

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Download and install kubelet and kube-proxy:

wget https://dl.k8s.io/v1.8.6/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/

Create the kubelet working directory:

mkdir /var/lib/kubelet

Configure kubelet:

master01

cat > kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
  --address=192.168.25.30 \\
  --hostname-override=192.168.25.30 \\
  --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --require-kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --container-runtime=docker \\
  --cluster-dns=10.254.0.2 \\
  --cluster-domain=cluster.local \\
  --hairpin-mode promiscuous-bridge \\
  --allow-privileged=true \\
  --serialize-image-pulls=false \\
  --register-node=true \\
  --logtostderr=true \\
  --cgroup-driver=cgroupfs  \\
  --v=2

Restart=on-failure
KillMode=process
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --address: this node's own IP; it must not be 127.0.0.1, otherwise Pods calling kubelet's API will fail, because from inside a Pod 127.0.0.1 points at the Pod itself, not at the kubelet;
  • --hostname-override: this node's own IP;
  • --cgroup-driver is set to cgroupfs (it only needs to match the cgroup driver configured in Docker);
  • --experimental-bootstrap-kubeconfig points at the bootstrap kubeconfig file; kubelet uses the user name and token in that file to send the TLS bootstrapping request to kube-apiserver;
  • after the administrator approves the CSR, kubelet automatically creates the certificate and private key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory, then writes the --kubeconfig file (created automatically);
  • it is recommended to put the kube-apiserver address in the --kubeconfig file; if the --api-servers option is not set, --require-kubeconfig must be set so the address is read from the config file, otherwise kubelet cannot find kube-apiserver after startup (the log reports that the API server cannot be found) and kubectl get nodes returns no Node information;
  • --cluster-dns is the kubedns Service IP (it can be allocated now and assigned when the kubedns service is created later); --cluster-domain is the domain suffix; both parameters must be set together for either to take effect;
  • --cluster-domain sets the search domain written into /etc/resolv.conf when a Pod starts. We initially configured it as cluster.local. (with a trailing dot): Service DNS names resolved normally, but FQDN pod names in headless services failed to resolve; changing it to cluster.local, dropping the trailing dot, fixed the problem;
  • the kubelet.kubeconfig file referenced by --kubeconfig does not exist before kubelet's first start; as described below, it is generated automatically once the CSR request is approved. If ~/.kube/config has already been generated on a node, it can be copied to this path and renamed kubelet.kubeconfig; all nodes can then share the same config file, and newly added nodes join the cluster without a new CSR request. Likewise, on any host that can reach the cluster, kubectl --kubeconfig with the ~/.kube/config file passes authentication, because that file carries the admin identity, which has full permissions on the cluster.

Start the kubelet service:

cp kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

2. Approve the TLS Certificate Requests

On first startup, kubelet sends a certificate signing request to kube-apiserver; a node is only added to the cluster after the request is approved.

List the pending requests:

# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro   3m        kubelet-bootstrap   Pending
node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs   3m        kubelet-bootstrap   Pending
node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8   3m        kubelet-bootstrap   Pending

Approve the requests:

# kubectl certificate approve node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro
certificatesigningrequest "node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro" approved

# kubectl certificate approve node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs
certificatesigningrequest "node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs" approved

# kubectl certificate approve node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8
certificatesigningrequest "node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8" approved

List all cluster nodes:

# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.25.30   Ready     <none>    15m       v1.8.6
192.168.25.31   Ready     <none>    15m       v1.8.6
192.168.25.32   Ready     <none>    15m       v1.8.6

3. Deploy the kube-proxy Service

Create the working directory:

mkdir -p /var/lib/kube-proxy

Configure the kube-proxy service:

cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --bind-address=192.168.25.30 \\
  --hostname-override=192.168.25.30 \\
  --cluster-cidr=10.254.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --bind-address: this node's own IP;
  • --hostname-override: this node's own IP; it must match the value used by kubelet, otherwise kube-proxy cannot find the Node after startup and will not create any iptables rules;
  • --cluster-cidr: must match kube-apiserver's --service-cluster-ip-range; kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from traffic outside it, and only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is set;
  • --kubeconfig: the referenced config file embeds the kube-apiserver address, user name, certificates, keys, and other request and authentication information;
  • the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants the permissions for the kube-apiserver proxy-related APIs.

Start the kube-proxy service:

cp kube-proxy.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
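
To confirm kube-proxy is programming the kernel, the KUBE- chains it creates can be inspected in the NAT table (the rules present vary with the Services defined):

iptables -t nat -S | grep KUBE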

9: Add-on Installation

Because the default manifests reference images on Google's registry, they need to be changed; mirrors on Docker Hub are used instead. The modified yaml files can be downloaded here:

Baidu Netdisk (extraction code: o3z9)

1. DNS Add-on

wget https://github.com/kubernetes/kubernetes/releases/download/v1.8.6/kubernetes.tar.gz
tar xzvf kubernetes.tar.gz

cd /root/kubernetes/cluster/addons/dns
mv  kubedns-svc.yaml.sed kubedns-svc.yaml
# replace $DNS_SERVER_IP in the file with 10.254.0.2
sed -i 's/$DNS_SERVER_IP/10.254.0.2/g' ./kubedns-svc.yaml

mv ./kubedns-controller.yaml.sed ./kubedns-controller.yaml
# replace $DNS_DOMAIN with cluster.local
sed -i 's/$DNS_DOMAIN/cluster.local/g' ./kubedns-controller.yaml

ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml

kubectl create -f .
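
Once the kubedns Pods are Running, name resolution can be checked from a throwaway test Pod (a sketch; the busybox image is pulled from Docker Hub):

kubectl run busybox --image=busybox --restart=Never -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default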

2. Dashboard Add-on

Download the deployment file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.1/src/deploy/recommended/kubernetes-dashboard.yaml

Modify the Service definition in the deployment file:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  # added
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # added
      nodePort: 8510
  selector:
    k8s-app: kubernetes-dashboard

Create the Pod:

kubectl create -f kubernetes-dashboard.yaml

Deploy the RBAC binding for authentication:

cat > ./kubernetes-dashboard-admin.rbac.yaml << EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF

kubectl create -f kubernetes-dashboard-admin.rbac.yaml

Access URL (currently it can only be opened in Firefox):

https://192.168.25.30:8510
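
To log in with a token, the token of the kubernetes-dashboard ServiceAccount (bound to cluster-admin above) can be read out (a sketch; the secret name suffix is generated by Kubernetes):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/ {print $1}')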

3. Heapster Add-on

Download the installation files:

wget https://github.com/kubernetes/heapster/archive/v1.5.0.tar.gz
tar xzvf ./v1.5.0.tar.gz
cd ./heapster-1.5.0/

kubectl create -f deploy/kube-config/influxdb/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml

Confirm that all Pods have started normally:

kubectl get pods --all-namespaces

10: Deploying Common Services

1.nginx

Deployment file:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
     app: nginx
spec:
  containers:
     - name: nginx
       image: registry.cn-qingdao.aliyuncs.com/k8/nginx:1.9.0
       imagePullPolicy: IfNotPresent
       ports:
       - containerPort: 80
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
    # map container port 80 to NodePort 8888 on the host
    - port: 80
      nodePort: 8888
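
Apply the manifest and test the Service through any node IP (assuming it is saved as nginx.yaml):

kubectl create -f nginx.yaml
curl http://192.168.25.30:8888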

2.mysql

Deployment file:

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
     app: mysql
spec:
  containers:
     - name: mysql
       image: mysql
       # environment variables
       env:
       - name: MYSQL_ROOT_PASSWORD
         value: "123456"
       imagePullPolicy: IfNotPresent
       # port exposed by the container
       ports:
       - containerPort: 3306
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: mysql
  ports:
    - port: 3306
      nodePort: 9306
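
Apply the manifest and connect through the NodePort (assuming it is saved as mysql.yaml and a mysql client is available):

kubectl create -f mysql.yaml
mysql -h 192.168.25.30 -P 9306 -uroot -p123456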

11: Common Commands

1. View kubelet Logs

journalctl -u kubelet -f

2. View Pod Information

kubectl get pods --all-namespaces

3. View Service Information

kubectl get svc --all-namespaces