Problems Encountered Installing a K8S v1.11.0 Cluster on CentOS 7.4

0. Introduction

Recently I planned to move my existing project's Docker deployment onto Alibaba Cloud. Previously it ran on a single machine, but now I have 3 machines on Alibaba Cloud, so I want to build a Docker cluster. I originally considered using Docker Swarm for this, but after seeing that K8S is much more widely used these days, I decided to deploy a K8S cluster on these three machines instead.

Here is a brief introduction to Kubernetes:

Kubernetes is an open-source project initiated by the Google team. Its goal is to manage containers across multiple hosts, providing basic deployment, maintenance, and application scaling. Its main implementation language is Go. Kubernetes is:

  • Easy to learn: lightweight, simple, easy to understand
  • Portable: supports public, private, and hybrid clouds, as well as multiple cloud platforms
  • Extensible: modular, pluggable, supports hooks, can be composed freely
  • Self-healing: automatic rescheduling, automatic restarts, automatic replication

Looks pretty impressive. Let's get started with the deployment.

1. Preparation

A good start is half the battle. Honestly, if it weren't for the GFW there wouldn't be so much hassle. First we need to set up the environment required to install Kubernetes. I did not install Kubernetes from scratch here; instead I used Kubeadm to install and configure the K8S cluster.

1.1 Installing Docker-CE

For how to install Docker-CE on CentOS, see my earlier article on the topic; it only takes a few minutes.
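For reference, a minimal sketch of the usual Docker-CE install on CentOS 7 looks like the following (the repo URL here is the standard upstream one, not necessarily the mirror my earlier article uses):

# Install repo-management tools and storage dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the docker-ce repository and install
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
# Enable and start the Docker daemon
systemctl enable docker && systemctl start docker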

1.2 Installing Kubeadm

To install Kubeadm, we first need to configure Alibaba Cloud's domestic mirror. Run the following command:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

After that, run the following commands to rebuild the Yum cache:

yum -y install epel-release
yum clean all
yum makecache

Now we can install Kubeadm proper:

yum -y install kubelet kubeadm kubectl kubernetes-cni
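Note that this installs whatever version the repo currently carries. Since this walkthrough targets v1.11.0, you may want to pin the versions explicitly; the package naming below assumes the usual el7 convention in this repo:

yum -y install kubelet-1.11.0 kubeadm-1.11.0 kubectl-1.11.0 kubernetes-cni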

Barring surprises, once the installation completes, run the following command to enable and start the kubelet service:

systemctl enable kubelet && systemctl start kubelet
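To double-check what actually got installed, a quick sanity check:

kubeadm version
kubectl version --client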

1.3 Preparing the Images Kubeadm Uses

This is the crucial part. Because Google's image registry is unreachable from within China, we need to run the following script to pull equivalent images from Docker Hub and re-tag them so their names match the ones kubeadm would pull from Google.

Create a new shell script, paste in the following code, and save it.

#!/bin/bash
images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
k8s-dns-dnsmasq-nanny-amd64:1.14.9 )
for imageName in ${images[@]} ; do
    # Pull the mirrored image from Docker Hub, re-tag it under the name
    # kubeadm expects, then drop the original tag
    docker pull keveon/$imageName
    docker tag keveon/$imageName k8s.gcr.io/$imageName
    docker rmi keveon/$imageName
done
# Added by me; required for v1.11.0. kubeadm looks for k8s.gcr.io/pause:3.1,
# which is the same image as pause-amd64:3.1, so give it the extra tag
# (tagging by name is less fragile than hard-coding the image ID):
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1

Note: I hit a pitfall here. The script's original author wrote it for 1.10, so kubeadm init kept failing, complaining it couldn't find the images. Even after the image versions were corrected, it still got stuck at the line [init] this might take a minute or longer if the control plane images have to be pulled. After testing on an overseas VPS, I found there was one extra image, k8s.gcr.io/pause:3.1, whose ID is actually identical to pause-amd64:3.1; once I added that extra tag, the deployment went through normally.

After saving, remember to make the shell script executable with chmod:

chmod +x ./xxx.sh
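Then run the script, and confirm the re-tagged images are all in place under the k8s.gcr.io prefix:

./xxx.sh
docker images | grep k8s.gcr.io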

1.4 Disabling Swap

sudo swapoff -a
# To disable swap permanently, open the following file and comment out the swap line:
# sudo vi /etc/fstab
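If you'd rather not edit the file by hand, a one-liner along these lines comments out any swap entry (a sketch; inspect /etc/fstab afterwards to be sure, and the .bak suffix keeps a backup):

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab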

1.5 Disabling SELinux

# Permanently disable SELinux by editing /etc/sysconfig/selinux
# (this pattern covers both SELINUX=enforcing and SELINUX=permissive)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
# Temporarily disable SELinux for the current boot
setenforce 0
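You can verify the runtime state with getenforce, which should now print Permissive (or Disabled after a reboot):

getenforce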

1.6 Configuring Forwarding Parameters

# Configure forwarding-related kernel parameters; without these you may hit errors later
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
# Apply the settings
sysctl --system
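One caveat from my own testing rather than the original steps: the bridge-nf-call switches only exist once the br_netfilter kernel module is loaded, so if sysctl complains about unknown keys, load the module and re-apply:

modprobe br_netfilter
sysctl --system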

2. [Master] Installing Kubernetes Proper

If you've done the preparation properly, everything that follows is a piece of cake.

2.1 Initializing the Master

To initialize the master, run the following command:

kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16

The first flag is the version number; the second is the IP range for your pod network.

After running it, you should see output close to mine:

I0712 10:46:30.938979   13461 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0712 10:46:30.961005   13461 kernel_validator.go:81] Validating kernel version
I0712 10:46:30.961061   13461 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
    [WARNING Hostname]: hostname "g2-apigateway" could not be reached
    [WARNING Hostname]: hostname "g2-apigateway" lookup g2-apigateway on 100.100.2.138:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [g2-apigateway kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.8.62]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [g2-apigateway localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [g2-apigateway localhost] and IPs [172.16.8.62 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.001672 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node g2-apigateway as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node g2-apigateway as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "g2-apigateway" as an annotation
[bootstraptoken] using token: o337m9.ceq32wg9g2gro7gx
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.16.8.62:6443 --token o337m9.ceq32wg9g2gro7gx --discovery-token-ca-cert-hash sha256:e8adc6dc2bbe6bd18569c73e4c0468b4652655e7c5c97209a9ec214beac55ea3
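Save that kubeadm join line somewhere. If you lose it, the join token expires after 24 hours by default, but as far as I know you can generate a fresh join command on the master at any time:

kubeadm token create --print-join-command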

2.2 Configuring kubectl Credentials

export KUBECONFIG=/etc/kubernetes/admin.conf
# To make this persistent, run the following instead [recommended]
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
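A quick sanity check that kubectl can now reach the API server:

kubectl cluster-info
kubectl get componentstatuses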

2.3 Installing the Flannel Network

Run the following commands in order:

mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
EOF
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

Finally, we need to create a new flannel.yml file with the following content:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.9.1-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conf
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.9.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Then run:

kubectl create -f ./flannel.yml
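Optionally, watch the flannel and CoreDNS pods in kube-system until they all reach Running (press Ctrl-C to stop watching):

kubectl get pods -n kube-system -w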

Once that completes, we can run the following command to check the current node info:

kubectl get nodes

You should get output similar to this:

NAME               STATUS    ROLES     AGE       VERSION
g2-master           Ready     master    46m       v1.11.0

That's it; the master is fully configured.

3. [Worker Node] Configuration

Everything a worker node needs to do is covered in the Preparation section above. Once that's done, just run the join command the master printed earlier:

kubeadm join 172.16.8.62:6443 --token o337m9.ceq32wg9g2gro7gx --discovery-token-ca-cert-hash sha256:e8adc6dc2bbe6bd18569c73e4c0468b4652655e7c5c97209a9ec214beac55ea3

Once it finishes, you're done.
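If the join hangs or fails, checking kubelet on that node usually reveals why (a generic diagnostic step, not something from the original walkthrough):

systemctl status kubelet
journalctl -xeu kubelet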

Then back on the master server (the .62 machine): I had just run the above command on the two worker servers, and then ran:

kubectl get nodes

And got this output:

NAME               STATUS    ROLES     AGE       VERSION
g2-master           Ready     master    46m       v1.11.0
g2-node1            Ready     <none>    41m       v1.11.0
g2-node2            Ready     <none>    41m       v1.11.0

4. Dashboard Configuration

Configuring the Dashboard for Kubernetes isn't straightforward either. You can deploy it using the official dashboard YAML files, or use the modified version provided by the blogger Mr.Devin to avoid the usual pitfalls.

The repo is at https://github.com/gh-Devin/kubernetes-dashboard. Download those YAML files and, from the directory they live in (make sure you are in the directory containing the YAML files), run the following command:

kubectl -n kube-system create -f .

This starts all the containers the Dashboard needs.
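Before opening the browser, you can confirm the dashboard pod has reached Running state; the k8s-app=kubernetes-dashboard label here is an assumption based on the labels these YAML files use:

kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard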

Visit IP:30090 on your MASTER host to bring up the Dashboard.

You'll find it throws errors... and no containers are visible. At this point you need to create a dashboard-admin.yaml file and fill it with the following content:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Once that's in place, run the following command to apply it:

kubectl create -f ./dashboard-admin.yaml

Visit the page again, and everything works.

5. Conclusion

References: https://www.kubernetes.org.cn/3805.html

Dashboard Web-UI configuration: https://www.kubernetes.org.cn/3834.html

Dashboard troubleshooting: https://medium.com/@osamasaad_94885/i-got-it-to-work-finally-27514babede3
