Setting up a Kubernetes (v1.17.3) cluster with kubeadm and a minimal KubeSphere installation

 Preface: continuous learning is a programmer's fate.

Part 1: Building a k8s cluster with kubeadm

1. System preparation

 Three virtual machines:

IP              Hostname   Spec
192.168.56.100  node01     4 CPU, 4 GB RAM
192.168.56.101  node02     4 CPU, 4 GB RAM
192.168.56.102  node03     4 CPU, 4 GB RAM

2. Environment configuration (all 3 nodes)

2.1 Disable the firewall (all 3 nodes)

systemctl stop firewalld
systemctl disable firewalld

2.2 Disable SELinux (all 3 nodes)

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

2.3 Disable swap (all 3 nodes)

swapoff -a                            #disable swap temporarily
sed -ri 's/.*swap.*/#&/' /etc/fstab   #disable swap permanently
free -g                               #verify: Swap must show 0
              total        used        free      shared  buff/cache   available
Mem:              3           0           3           0           0           3
Swap:             0           0           0

2.4 Configure hostnames (all 3 nodes)

Check the current hostname with the hostname command.
If it is not correct, change it with hostnamectl set-hostname <new-hostname>.
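For example, using the hostnames that appear later in this article, run the matching command on each node:

hostnamectl set-hostname k8s-node1    #on 192.168.56.100
hostnamectl set-hostname k8s-node2    #on 192.168.56.101
hostnamectl set-hostname k8s-node3    #on 192.168.56.102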

[root@k8s-node1 ~]# ip route show 
default via 10.0.2.1 dev eth0 proto dhcp metric 101 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 101 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 100 
[root@k8s-node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ac:35:31 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 1184sec preferred_lft 1184sec
    inet6 fe80::a00:27ff:feac:3531/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:02:58:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe02:5816/64 scope link 
       valid_lft forever preferred_lft forever
[root@k8s-node1 ~]# cat /etc/hosts  #same on all 3 nodes
127.0.0.1	k8s-node1	k8s-node1
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.2.15 k8s-node1
10.0.2.4 k8s-node2
10.0.2.5 k8s-node3

2.5 Configure kernel parameters

Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf <<EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF
[root@k8s-node1 ~]#  sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
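Note: the two bridge-nf-call keys above only take effect when the br_netfilter kernel module is loaded. If sysctl --system reports that the keys do not exist, load the module first and re-apply (an extra step not shown in the original run):

modprobe br_netfilter
lsmod | grep br_netfilter
sysctl --system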

3. Install Docker (all 3 nodes)

The default CRI (container runtime) used here by Kubernetes is Docker, so install Docker first.

3.1 Remove old Docker versions

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

3.2 Install Docker CE (all 3 nodes)

sudo yum install -y yum-utils

 sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

 yum-config-manager  --add-repo  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
 sudo yum -y install docker-ce docker-ce-cli containerd.io

3.3 Configure a registry mirror (all 3 nodes)

The Aliyun registry mirror service is used here.

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://0v8k2rvr.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
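Optionally, the Docker cgroup driver can be switched to systemd in the same file; the kubeadm preflight check in section 5.2 warns that cgroupfs is detected while systemd is recommended. A hedged sketch that keeps the mirror setting (apply it on all three nodes before running kubeadm, since the kubelet must end up using the same driver):

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://0v8k2rvr.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker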

3.4 Start Docker and enable it at boot (all 3 nodes)

[root@node01 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
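The output above only captures the enable step; starting the daemon and confirming it is running is just:

sudo systemctl start docker
systemctl status docker --no-pager
docker version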

4. Install kubeadm, kubelet and kubectl

4.1 Add the Aliyun yum repository (all 3 nodes)

More details are available on GitHub.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.2 Install kubeadm, kubelet and kubectl

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3 --setopt=obsoletes=0

4.3 Enable kubelet at boot

systemctl enable kubelet && systemctl start kubelet

4.4 Check kubelet status (it keeps restarting until kubeadm init/join has run; this is expected)

#systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2020-06-26 14:53:12 CST; 4s ago
     Docs: https://kubernetes.io/docs/
  Process: 10192 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 10192 (code=exited, status=255)

Jun 26 14:53:12 node01 systemd[1]: Unit kubelet.service entered failed state.
Jun 26 14:53:12 node01 systemd[1]: kubelet.service failed.

4.5 Check the kubelet version

kubelet --version
Kubernetes v1.17.3

5. Deploy the k8s master

5.1 On the master node, create and run master_images.sh with the following content:

#!/bin/bash

images=(
	kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
	kube-controller-manager:v1.17.3
	kube-scheduler:v1.17.3
	coredns:1.6.5
	etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
#   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName
done
[root@k8s-node1 ~]# ./master_images.sh    #pull the images
Then uncomment the docker tag line in the script and run it again to re-tag the images (a one-shot tagging loop is also sketched after the image list below):
[root@k8s-node1 ~]# ./master_images.sh    #tag the images
[root@k8s-node1 ~]# docker images    #list the images just pulled
REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                                                         v1.17.3             ae853e93800d        4 months ago        116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.17.3             ae853e93800d        4 months ago        116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.17.3             b0f1517c1f4b        4 months ago        161MB
k8s.gcr.io/kube-controller-manager                                            v1.17.3             b0f1517c1f4b        4 months ago        161MB
k8s.gcr.io/kube-apiserver                                                     v1.17.3             90d27391b780        4 months ago        171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.17.3             90d27391b780        4 months ago        171MB
k8s.gcr.io/kube-scheduler                                                     v1.17.3             d109c0821a2b        4 months ago        94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.17.3             d109c0821a2b        4 months ago        94.4MB
k8s.gcr.io/coredns                                                            1.6.5               70f311871ae1        7 months ago        41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        7 months ago        41.6MB
k8s.gcr.io/etcd                                                               3.4.3-0             303ce5db0e90        8 months ago        288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        8 months ago        288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
k8s.gcr.io/pause                                                              3.1                 da86e6ba6ca1        2 years ago         742kB
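Instead of editing the script and running it twice, the tagging can also be done with a separate loop once the pulls finish; a minimal sketch that reuses the same image list as master_images.sh:

images=(kube-apiserver:v1.17.3 kube-proxy:v1.17.3 kube-controller-manager:v1.17.3 kube-scheduler:v1.17.3 coredns:1.6.5 etcd:3.4.3-0 pause:3.1)
for imageName in ${images[@]} ; do
    #re-tag the Aliyun mirror images with the k8s.gcr.io names kubeadm expects
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done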

5.2 Initialize kubeadm (master node)

Check the network interface addresses:

[root@k8s-node1 ~]# ip addr   #the default NIC (eth0) will be used for initialization
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ac:35:31 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 824sec preferred_lft 824sec
    inet6 fe80::a00:27ff:feac:3531/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:02:58:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe02:5816/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:2f:70:a1:f8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Initialize the master node:

kubeadm init --kubernetes-version=1.17.3  \
--apiserver-advertise-address=10.0.2.15   \
--image-repository registry.aliyuncs.com/google_containers  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

Note:

  • --apiserver-advertise-address=10.0.2.15: this is the master host's IP address, i.e. the eth0 address shown above.
    The output is as follows:
W0627 05:59:23.420885    2230 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0627 05:59:23.420970    2230 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0627 05:59:36.078833    2230 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0627 05:59:36.079753    2230 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.002443 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: z2roeo.9tzndilx8gnjjqfj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.15:6443 --token z2roeo.9tzndilx8gnjjqfj \
    --discovery-token-ca-cert-hash sha256:7cfbf6693daa652f2af8e45594c4a66f45a8d081711e7e17a45cc42abfe7792f

Since the default image registry k8s.gcr.io cannot be reached from mainland China, the Aliyun registry is specified here. You can also pull the images in advance with the master_images.sh script above.

Using registry.aliyuncs.com/google_containers as the address also works.
Background: Classless Inter-Domain Routing (CIDR) is a method of allocating IP addresses and of grouping them so that IP packets can be routed efficiently across the Internet.
Pulling may fail during init; in that case download the images first.

When the command finishes, copy the kubeadm join command (with its token) that it prints, for use later.

5.3 Test kubectl (run on the master node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Detailed deployment documentation is available in the official docs.

[root@node02 ~]# kubectl get nodes    #list all nodes; the master is NotReady for now and becomes Ready once the pod network is installed
NAME     STATUS     ROLES    AGE   VERSION
node02   NotReady   master   97s   v1.17.3
[root@node02 ~]# journalctl -u kubelet    #view the kubelet logs
-- Logs begin at Fri 2020-06-26 22:32:06 CST, end at Fri 2020-06-26 15:31:07 CST. --
Jun 26 14:52:40 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 26 14:52:40 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 26 14:52:40 node02 kubelet[11114]: F0626 14:52:40.203059   11114 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
Jun 26 14:52:40 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 26 14:52:40 node02 systemd[1]: Unit kubelet.service entered failed state.
Jun 26 14:52:40 node02 systemd[1]: kubelet.service failed.
Jun 26 14:52:50 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jun 26 14:52:50 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 26 14:52:50 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 26 14:52:50 node02 kubelet[11128]: F0626 14:52:50.311073   11128 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
Jun 26 14:52:50 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 26 14:52:50 node02 systemd[1]: Unit kubelet.service entered failed state.
Jun 26 14:52:50 node02 systemd[1]: kubelet.service failed.
Jun 26 14:53:00 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jun 26 14:53:00 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 26 14:53:00 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 26 14:53:00 node02 kubelet[11142]: F0626 14:53:00.562832   11142 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
Jun 26 14:53:00 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 26 14:53:00 node02 systemd[1]: Unit kubelet.service entered failed state.
Jun 26 14:53:00 node02 systemd[1]: kubelet.service failed.
Jun 26 14:53:10 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jun 26 14:53:10 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 26 14:53:10 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 26 14:53:10 node02 kubelet[11157]: F0626 14:53:10.810988   11157 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
Jun 26 14:53:10 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 26 14:53:10 node02 systemd[1]: Unit kubelet.service entered failed state.
Jun 26 14:53:10 node02 systemd[1]: kubelet.service failed.
Jun 26 14:53:21 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jun 26 14:53:21 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 26 14:53:21 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 26 14:53:21 node02 kubelet[11171]: F0626 14:53:21.061248   11171 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
Jun 26 14:53:21 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 26 14:53:21 node02 systemd[1]: Unit kubelet.service entered failed state.
Jun 26 14:53:21 node02 systemd[1]: kubelet.service failed.
Jun 26 14:53:31 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jun 26 14:53:31 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 26 14:53:31 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 26 14:53:31 node02 kubelet[11185]: F0626 14:53:31.311175   11185 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
Jun 26 14:53:31 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 26 14:53:31 node02 systemd[1]: Unit kubelet.service entered failed state.

5.4 Install the pod network add-on (CNI)

Install the pod network add-on on the master node:

kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The URL above may be blocked; you can instead apply a locally downloaded flannel.yml, for example:

Local flannel.yml:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
[root@node02 ~]# ll
total 20
-rw-r--r-- 1 root root 15016 Feb 26 23:05 kube-flannel.yml
-rwx------ 1 root root   392 Jun 26 14:57 master_images.sh
[root@node02 ~]# kubectl apply -f  kube-flannel.yml   ###run on the master node
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If the images referenced in flannel.yml cannot be pulled either, find a mirrored copy on Docker Hub and wget a yml that uses it,
or edit the local yml with vi so that all the amd64 image references point to the mirror (see the sketch below).
Wait about 3 minutes.
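For example, a hedged one-liner that rewrites the image references in the local manifest (the mirror repository is only a placeholder; substitute whatever mirrored copy you found on Docker Hub):

sed -i 's#quay.io/coreos/flannel#<your-mirror>/flannel#g' kube-flannel.yml
grep 'image:' kube-flannel.yml    #verify the result before re-applying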

[root@k8s-node1 k8s]#  kubectl get pods --all-namespaces  #list the pods in all namespaces
NAMESPACE     NAME                                READY   STATUS     RESTARTS   AGE
kube-system   coredns-9d85f5447-bmwwg             1/1     Running    0          10m
kube-system   coredns-9d85f5447-qwd5q             1/1     Running    0          10m
kube-system   etcd-k8s-node1                      1/1     Running    0          10m
kube-system   kube-apiserver-k8s-node1            1/1     Running    0          10m
kube-system   kube-controller-manager-k8s-node1   1/1     Running    0          10m
kube-system   kube-flannel-ds-amd64-cn6m9         0/1     Init:0/1   0          55s
kube-system   kube-flannel-ds-amd64-kbbhz         1/1     Running    0          4m11s
kube-system   kube-flannel-ds-amd64-lll8c         0/1     Init:0/1   0          52s
kube-system   kube-proxy-df9jw                    1/1     Running    0          10m
kube-system   kube-proxy-kwg4s                    1/1     Running    0          52s
kube-system   kube-proxy-t5pkz                    1/1     Running    0          55s
kube-system   kube-scheduler-k8s-node1            1/1     Running    0          10m

$ ip link set cni0 down    #if the network misbehaves, take cni0 down, reboot the VM, and test again
Run watch kubectl get pod -n kube-system -o wide to monitor pod progress.
Wait 3-10 minutes and continue once every pod is Running.

List the namespaces:

[root@k8s-node1 ~]#  kubectl get ns
NAME              STATUS   AGE
default           Active   8m43s
kube-node-lease   Active   8m44s
kube-public       Active   8m44s
kube-system       Active   8m44s

Check the node information on the master:

[root@k8s-node1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   13m   v1.17.3   #the status must be Ready before running the commands below

5.5 Join node02 and node03 to the cluster

Finally, run the join command (printed when the master finished initializing) on both k8s-node2 and k8s-node3:

[root@k8s-node2 ~]# kubeadm join 10.0.2.15:6443 --token z2roeo.9tzndilx8gnjjqfj     --discovery-token-ca-cert-hash sha256:7cfbf6693daa652f2af8e45594c4a66f45a8d081711e7e17a45cc42abfe7792f
W0626 15:56:39.223689   10631 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check on the master node:

[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
k8s-node1   Ready      master   9m38s   v1.17.3
k8s-node2   NotReady   <none>   9s      v1.17.3
k8s-node3   NotReady   <none>   6s      v1.17.3

Monitor pod progress:

watch kubectl get pod -n kube-system -o wide

Once every pod's status is Running, check the node information again:

[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    master   11m    v1.17.3
k8s-node2   Ready    <none>   113s   v1.17.3
k8s-node3   Ready    <none>   110s   v1.17.3

5.6 Handling token expiry

To add a new node to the cluster, run the kubeadm join command printed by kubeadm init on that node,
and confirm that the node joins successfully.
If the token has expired, generate a new join command:

kubeadm token create --print-join-command
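Bootstrap tokens created by kubeadm init are valid for 24 hours by default; to check whether an existing token is still valid before creating a new one:

kubeadm token list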

5.7 Inspect the cluster

[root@k8s-node1 k8s]# kubectl get nodes #list all nodes (run on the master)
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    master   11m    v1.17.3
k8s-node2   Ready    <none>   113s   v1.17.3
k8s-node3   Ready    <none>   110s   v1.17.3
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces #list all pods (run on the master)
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-bmwwg             1/1     Running   0          11m
kube-system   coredns-9d85f5447-qwd5q             1/1     Running   0          11m
kube-system   etcd-k8s-node1                      1/1     Running   0          11m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          11m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          11m
kube-system   kube-flannel-ds-amd64-cn6m9         1/1     Running   0          2m29s
kube-system   kube-flannel-ds-amd64-kbbhz         1/1     Running   0          5m45s
kube-system   kube-flannel-ds-amd64-lll8c         1/1     Running   0          2m26s
kube-system   kube-proxy-df9jw                    1/1     Running   0          11m
kube-system   kube-proxy-kwg4s                    1/1     Running   0          2m26s
kube-system   kube-proxy-t5pkz                    1/1     Running   0          2m29s
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          11m
  • The k8s cluster setup is now complete; next comes the customized KubeSphere installation.

Part 2: Customized KubeSphere installation

KubeSphere official website

See the official documentation for details.
The prerequisites are as follows:

1. Install Helm (run on the master node)

Helm is the package manager for Kubernetes. Like apt on Ubuntu, yum on CentOS, or pip for Python, a package manager lets you quickly find, download, and install software packages. Helm (v2) consists of the helm client and the server-side Tiller component; it packages a group of Kubernetes resources for unified management and is the standard way to find, share, and use software built for Kubernetes.

[root@k8s-node1 k8s]# curl -L https://git.io/get_helm.sh|bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
100  7185  100  7185    0     0   1069      0  0:00:06  0:00:06 --:--:-- 12761
Downloading https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm

#if the URL above is blocked, use the locally provided get_helm.sh instead
[root@node02 ~]# ./get_helm.sh 
Downloading https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
[root@node02 ~]# helm version   #check the version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Error: could not find tiller
[root@node02 ~]# kubectl apply -f helm-rabc.yml   #create the RBAC permissions for tiller (run on the master); a sketch of this file follows below
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
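The helm-rabc.yml applied above is not listed in this article; a minimal sketch of what it typically contains (a tiller ServiceAccount in kube-system bound to the cluster-admin role), matching the two resources reported as created:

cat > helm-rabc.yml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF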

2. Install Tiller (run on the master)

[root@node02 ~]# helm init --service-account=tiller --tiller-image=sapcc/tiller:v2.16.3 --history-max 300 
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
  • --tiller-image specifies the Tiller image to use (the default registry may be blocked); wait for the tiller pod deployed on the node to become Ready.
[root@node02 ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-798ss          1/1     Running   0          88m
coredns-9d85f5447-pz4wr          1/1     Running   0          88m
etcd-node02                      1/1     Running   0          88m
kube-apiserver-node02            1/1     Running   0          88m
kube-controller-manager-node02   1/1     Running   0          88m
kube-flannel-ds-amd64-9x6lh      1/1     Running   0          74m
kube-flannel-ds-amd64-s77xm      1/1     Running   0          83m
kube-flannel-ds-amd64-t8wth      1/1     Running   0          60m
kube-proxy-2vbcp                 1/1     Running   0          60m
kube-proxy-bd7zp                 1/1     Running   0          74m
kube-proxy-lk459                 1/1     Running   0          88m
kube-scheduler-node02            1/1     Running   0          88m
tiller-deploy-5fdc6844fb-zwbz7   1/1     Running   0          38s

[root@node02 ~]# kubectl get node -o wide  #show detailed information for all cluster nodes
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
node01   Ready    <none>   60m   v1.17.3   172.16.174.134   <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://19.3.12
node02   Ready    master   88m   v1.17.3   172.16.174.133   <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64    docker://19.3.12
node03   Ready    <none>   74m   v1.17.3   172.16.193.3     <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://19.3.12

2.1 Test issue:

[root@k8s-node1 k8s]# helm install stable/nginx-ingress --name nginx-ingress
Error: release nginx-ingress failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:tiller" cannot get resource "namespaces" in API group "" in the namespace "default"

2.2 Solution:

[root@k8s-node1 k8s]# kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
[root@k8s-node1 k8s]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[root@k8s-node1 k8s]# 
[root@k8s-node1 k8s]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.apps/tiller-deploy patched (no change)
[root@k8s-node1 k8s]#  helm install --name nginx-ingress --set rbac.create=true stable/nginx-ingress

[root@k8s-node1 k8s]#  helm install --name nginx-ingress --set rbac.create=true stable/nginx-ingress
NAME:   nginx-ingress
LAST DEPLOYED: Sat Jun 27 10:52:53 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME           AGE
nginx-ingress  0s

==> v1/ClusterRoleBinding
NAME           AGE
nginx-ingress  0s

==> v1/Deployment
NAME                           READY  UP-TO-DATE  AVAILABLE  AGE
nginx-ingress-controller       0/1    1           0          0s
nginx-ingress-default-backend  0/1    1           0          0s

==> v1/Pod(related)
NAME                                            READY  STATUS             RESTARTS  AGE
nginx-ingress-controller-5989bf7f8f-p7rck       0/1    ContainerCreating  0         0s
nginx-ingress-default-backend-5b967cf596-4s9qn  0/1    ContainerCreating  0         0s
nginx-ingress-controller-5989bf7f8f-p7rck       0/1    ContainerCreating  0         0s
nginx-ingress-default-backend-5b967cf596-4s9qn  0/1    ContainerCreating  0         0s

==> v1/Role
NAME           AGE
nginx-ingress  0s

==> v1/RoleBinding
NAME           AGE
nginx-ingress  0s

==> v1/Service
NAME                           TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
nginx-ingress-controller       LoadBalancer  10.96.138.118  <pending>    80:31494/TCP,443:31918/TCP  0s
nginx-ingress-default-backend  ClusterIP     10.96.205.77   <none>       80/TCP                      0s

==> v1/ServiceAccount
NAME                   SECRETS  AGE
nginx-ingress          1        0s
nginx-ingress-backend  1        0s


NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

3. Install OpenEBS

The installation process follows the official documentation.

3.1 Check whether the master node has a taint; as shown below, it does:

[root@k8s-node1 k8s]# kubectl describe node k8s-node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

3.2 Remove the taint from the master node

[root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
node/k8s-node1 untainted

3.3 Create the OpenEBS namespace; the OpenEBS resources will be created under it

$ kubectl create ns openebs
A. If Helm is already installed in the cluster, OpenEBS can be installed via Helm:
helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
B. It can also be installed with kubectl:
$ kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml

[root@k8s-node1 k8s]# helm install --namespace openebs --name openebs stable/openebs --version 1.5.0   #here I install it with Helm
NAME:   openebs
LAST DEPLOYED: Sat Jun 27 11:12:58 2020
NAMESPACE: openebs
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME     AGE
openebs  0s

==> v1/ClusterRoleBinding
NAME     AGE
openebs  0s

==> v1/ConfigMap
NAME                DATA  AGE
openebs-ndm-config  1     0s

==> v1/DaemonSet
NAME         DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
openebs-ndm  3        3        0      3           0          <none>         0s

==> v1/Deployment
NAME                         READY  UP-TO-DATE  AVAILABLE  AGE
openebs-admission-server     0/1    0           0          0s
openebs-apiserver            0/1    0           0          0s
openebs-localpv-provisioner  0/1    1           0          0s
openebs-ndm-operator         0/1    1           0          0s
openebs-provisioner          0/1    1           0          0s
openebs-snapshot-operator    0/1    1           0          0s

==> v1/Pod(related)
NAME                                          READY  STATUS             RESTARTS  AGE
openebs-admission-server-5cf6864fbf-5fs2h     0/1    ContainerCreating  0         0s
openebs-apiserver-bc55cd99b-n6f67             0/1    ContainerCreating  0         0s
openebs-localpv-provisioner-85ff89dd44-n26ff  0/1    ContainerCreating  0         0s
openebs-ndm-8w67w                             0/1    ContainerCreating  0         0s
openebs-ndm-jj2vh                             0/1    ContainerCreating  0         0s
openebs-ndm-operator-87df44d9-6lbcx           0/1    ContainerCreating  0         0s
openebs-ndm-s8mbd                             0/1    ContainerCreating  0         0s
openebs-provisioner-7f86c6bb64-56cmf          0/1    ContainerCreating  0         0s
openebs-snapshot-operator-54b9c886bf-68nsf    0/2    ContainerCreating  0         0s
openebs-apiserver-bc55cd99b-n6f67             0/1    ContainerCreating  0         0s
openebs-localpv-provisioner-85ff89dd44-n26ff  0/1    ContainerCreating  0         0s
openebs-ndm-8w67w                             0/1    ContainerCreating  0         0s
openebs-ndm-jj2vh                             0/1    ContainerCreating  0         0s
openebs-ndm-operator-87df44d9-6lbcx           0/1    ContainerCreating  0         0s
openebs-ndm-s8mbd                             0/1    ContainerCreating  0         0s
openebs-provisioner-7f86c6bb64-56cmf          0/1    ContainerCreating  0         0s
openebs-snapshot-operator-54b9c886bf-68nsf    0/2    ContainerCreating  0         0s
openebs-apiserver-bc55cd99b-n6f67             0/1    ContainerCreating  0         0s
openebs-localpv-provisioner-85ff89dd44-n26ff  0/1    ContainerCreating  0         0s
openebs-ndm-8w67w                             0/1    ContainerCreating  0         0s
openebs-ndm-jj2vh                             0/1    ContainerCreating  0         0s
openebs-ndm-operator-87df44d9-6lbcx           0/1    ContainerCreating  0         0s
openebs-ndm-s8mbd                             0/1    ContainerCreating  0         0s
openebs-provisioner-7f86c6bb64-56cmf          0/1    ContainerCreating  0         0s
openebs-snapshot-operator-54b9c886bf-68nsf    0/2    ContainerCreating  0         0s
openebs-apiserver-bc55cd99b-n6f67             0/1    ContainerCreating  0         0s
openebs-localpv-provisioner-85ff89dd44-n26ff  0/1    ContainerCreating  0         0s
openebs-ndm-8w67w                             0/1    ContainerCreating  0         0s
openebs-ndm-jj2vh                             0/1    ContainerCreating  0         0s
openebs-ndm-operator-87df44d9-6lbcx           0/1    ContainerCreating  0         0s
openebs-ndm-s8mbd                             0/1    ContainerCreating  0         0s
openebs-provisioner-7f86c6bb64-56cmf          0/1    ContainerCreating  0         0s
openebs-snapshot-operator-54b9c886bf-68nsf    0/2    ContainerCreating  0         0s
openebs-apiserver-bc55cd99b-n6f67             0/1    ContainerCreating  0         0s
openebs-localpv-provisioner-85ff89dd44-n26ff  0/1    ContainerCreating  0         0s
openebs-ndm-8w67w                             0/1    ContainerCreating  0         0s
openebs-ndm-jj2vh                             0/1    ContainerCreating  0         0s
openebs-ndm-operator-87df44d9-6lbcx           0/1    ContainerCreating  0         0s
openebs-ndm-s8mbd                             0/1    ContainerCreating  0         0s
openebs-provisioner-7f86c6bb64-56cmf          0/1    ContainerCreating  0         0s
openebs-snapshot-operator-54b9c886bf-68nsf    0/2    ContainerCreating  0         0s
openebs-ndm-8w67w                             0/1    ContainerCreating  0         0s
openebs-ndm-jj2vh                             0/1    ContainerCreating  0         0s
openebs-ndm-s8mbd                             0/1    ContainerCreating  0         0s

==> v1/Service
NAME                TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)   AGE
openebs-apiservice  ClusterIP  10.96.7.135  <none>       5656/TCP  0s

==> v1/ServiceAccount
NAME     SECRETS  AGE
openebs  1        0s


NOTES:
The OpenEBS has been installed. Check its status by running:
$ kubectl get pods -n openebs

For dynamically creating OpenEBS Volumes, you can either create a new StorageClass or
use one of the default storage classes provided by OpenEBS.

Use `kubectl get sc` to see the list of installed OpenEBS StorageClasses. A sample
PVC spec using `openebs-jiva-default` StorageClass is given below:"

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
---

For more information, please visit http://docs.openebs.io/.

Please note that, OpenEBS uses iSCSI for connecting applications with the
OpenEBS Volumes and your nodes should have the iSCSI initiator installed.

3.4 Installing OpenEBS automatically creates 4 StorageClasses; list them:

[root@k8s-node1 ~]# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  18m
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  18m
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  18m
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  18m
###check the pod status
[root@k8s-node1 ~]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                             READY   STATUS    RESTARTS   AGE
default         nginx-ingress-controller-5989bf7f8f-p7rck        0/1     Running   2          49m
default         nginx-ingress-default-backend-5b967cf596-4s9qn   1/1     Running   1          49m
ingress-nginx   nginx-ingress-controller-s2jmb                   0/1     Running   4          87m
ingress-nginx   nginx-ingress-controller-tzhmm                   0/1     Running   3          29m
ingress-nginx   nginx-ingress-controller-wrlcm                   0/1     Running   3          87m
kube-system     coredns-7f9c544f75-bdrf5                         1/1     Running   3          122m
kube-system     coredns-7f9c544f75-ltfw6                         1/1     Running   3          122m
kube-system     etcd-k8s-node1                                   1/1     Running   3          122m
kube-system     kube-apiserver-k8s-node1                         1/1     Running   3          122m
kube-system     kube-controller-manager-k8s-node1                1/1     Running   4          122m
kube-system     kube-flannel-ds-amd64-jfjlw                      1/1     Running   6          116m
kube-system     kube-flannel-ds-amd64-vwjrd                      1/1     Running   4          117m
kube-system     kube-flannel-ds-amd64-wqbhw                      1/1     Running   5          119m
kube-system     kube-proxy-clts7                                 1/1     Running   3          117m
kube-system     kube-proxy-wnq6t                                 1/1     Running   4          116m
kube-system     kube-proxy-xjz7c                                 1/1     Running   3          122m
kube-system     kube-scheduler-k8s-node1                         1/1     Running   4          122m
kube-system     tiller-deploy-797955c678-gtblv                   0/1     Running   2          52m
openebs         openebs-admission-server-5cf6864fbf-5fs2h        1/1     Running   2          28m
openebs         openebs-apiserver-bc55cd99b-n6f67                1/1     Running   9          28m
openebs         openebs-localpv-provisioner-85ff89dd44-n26ff     1/1     Running   2          28m
openebs         openebs-ndm-8w67w                                1/1     Running   3          28m
openebs         openebs-ndm-jj2vh                                1/1     Running   3          28m
openebs         openebs-ndm-operator-87df44d9-6lbcx              0/1     Running   3          28m
openebs         openebs-ndm-s8mbd                                1/1     Running   3          28m
openebs         openebs-provisioner-7f86c6bb64-56cmf             1/1     Running   2          28m
openebs         openebs-snapshot-operator-54b9c886bf-68nsf       2/2     Running   1          28m

3.5 Set openebs-hostpath as the default StorageClass:

[root@k8s-node1 ~]# kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/openebs-hostpath patched
[root@k8s-node1 ~]# kubectl get sc
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  21m
openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  21m
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  21m
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  21m

3.6 Re-apply the taint to the master node so that business workloads are not scheduled onto it and do not compete for master resources

[root@k8s-node1 ~]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule
node/k8s-node1 tainted
  • The prerequisites are now satisfied; next, install KubeSphere.

4. Minimal KubeSphere installation

kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml
[root@k8s-node1 k8s]# kubectl apply -f kubespere-mini.yaml 
namespace/kubesphere-system created
configmap/ks-installer created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
[root@node02 ~]#   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f   #follow the installer log
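Once the installer log reports a successful installation, the KubeSphere console is exposed as a NodePort service. A hedged check (port 30880 and the admin / P@88w0rd account are the usual KubeSphere 2.x defaults; verify against the installer's final output):

kubectl get svc -n kubesphere-system ks-console
#then open http://<any-node-ip>:30880 and log in with the default account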