Kubernetes and Dashboard: Detailed Installation and Configuration (Ubuntu 14.04)

A while ago our department planned to move to parallel development and needed to isolate the development and test environments, so we decided to use Kubernetes to manage versioned Docker containers. Setting up the Kubernetes cluster went as follows:

 This walkthrough uses the Aliyun registry accelerator; look up its configuration steps separately.

System settings (Ubuntu 14.04):

  Disable swap:

    sudo swapoff -a

  Disable the firewall:

    $ systemctl stop firewalld

    $ systemctl disable firewalld

  Disable SELINUX:

    $ setenforce 0

First, install Docker on every node:

  apt-get update && apt-get install docker.io

  (If apt-get update fails with an error like Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/ , fix it as follows:

    sudo pkill -KILL appstreamcli

    wget -P /tmp https://launchpad.net/ubuntu/+archive/primary/+files/appstream_0.9.4-1ubuntu1_amd64.deb https://launchpad.net/ubuntu/+archive/primary/+files/libappstream3_0.9.4-1ubuntu1_amd64.deb

    sudo dpkg -i /tmp/appstream_0.9.4-1ubuntu1_amd64.deb /tmp/libappstream3_0.9.4-1ubuntu1_amd64.deb)

Then install the kubelet, kubeadm and kubectl components on all nodes:

  kubelet runs on every node in the cluster and is responsible for starting Pods and containers.

  kubeadm is used to initialize the cluster.

  kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete and update components.

  Edit the package sources:

    sudo vi /etc/apt/sources.list

  Add the kubeadm and Kubernetes component package source:

    deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main

  Update the package index:

    apt-get update

  Install the kubeadm, kubectl and kubelet packages at a fixed version (the latest version has no mirrored images, so install 1.10.0 instead):

    apt-get install kubelet=1.10.0-00

    apt-get install kubeadm=1.10.0-00

    apt-get install kubectl=1.10.0-00
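The three install commands above can also be written as one short script that loops over the packages; this is a dry-run sketch (the real apt-get and apt-mark calls are commented out, and the idea of holding the packages so an upgrade cannot break the pinned version is my addition, not part of the original post):

```shell
# Sketch: install all three components at the pinned version, then hold
# them so a later `apt-get upgrade` cannot pull a release whose images
# are not mirrored.
KUBE_VERSION=1.10.0-00          # version used throughout this guide
for pkg in kubelet kubeadm kubectl; do
  echo "installing ${pkg}=${KUBE_VERSION}"
  # apt-get install -y "${pkg}=${KUBE_VERSION}"   # uncomment on a real node
done
# apt-mark hold kubelet kubeadm kubectl           # pin the installed versions
```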

 

Pull the images Kubernetes needs (to avoid being blocked during initialization, I pushed the required images to an Aliyun registry). Put the commands in a script:

  touch docker.sh

  chmod 755 docker.sh

  Edit the script and add the following:

  # pull the images

  url=registry.cn-hangzhou.aliyuncs.com

  # core components

  docker pull $url/sach-k8s/etcd-amd64:3.1.12

  docker pull $url/sach-k8s/kube-apiserver-amd64:v1.10.0

  docker pull $url/sach-k8s/kube-scheduler-amd64:v1.10.0

  docker pull $url/sach-k8s/kube-controller-manager-amd64:v1.10.0

  # networking

  docker pull $url/sach-k8s/flannel:v0.10.0-amd64

  docker pull $url/sach-k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.8

  docker pull $url/sach-k8s/k8s-dns-sidecar-amd64:1.14.8

  docker pull $url/sach-k8s/k8s-dns-kube-dns-amd64:1.14.8

  docker pull $url/sach-k8s/pause-amd64:3.1

  docker pull $url/sach-k8s/kube-proxy-amd64:v1.10.0

  #dashboard

  docker pull $url/sach-k8s/kubernetes-dashboard-amd64:v1.8.3

  #heapster

  docker pull $url/sach-k8s/heapster-influxdb-amd64:v1.3.3

  docker pull $url/sach-k8s/heapster-grafana-amd64:v4.4.3

  docker pull $url/sach-k8s/heapster-amd64:v1.4.2

  #ingress

  docker pull $url/sach-k8s/nginx-ingress-controller:0.15.0

  docker pull $url/sach-k8s/defaultbackend:1.4

  # retag to the image names Kubernetes expects

  docker tag $url/sach-k8s/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12 

  docker tag $url/sach-k8s/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0 

  docker tag $url/sach-k8s/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0 

  docker tag $url/sach-k8s/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0 

  docker tag $url/sach-k8s/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64 

  docker tag $url/sach-k8s/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1 

  docker tag $url/sach-k8s/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0 

  docker tag $url/sach-k8s/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 

  docker tag $url/sach-k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 

  docker tag $url/sach-k8s/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8

  docker tag $url/sach-k8s/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3 

  docker tag $url/sach-k8s/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3 

  docker tag $url/sach-k8s/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3 

  docker tag $url/sach-k8s/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2 

  docker tag $url/sach-k8s/defaultbackend:1.4 gcr.io/google_containers/defaultbackend:1.4

  docker tag $url/sach-k8s/nginx-ingress-controller:0.15.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0

  # remove the redundant mirror tags

  # core components

  docker rmi $url/sach-k8s/etcd-amd64:3.1.12

  docker rmi $url/sach-k8s/kube-apiserver-amd64:v1.10.0

  docker rmi $url/sach-k8s/kube-scheduler-amd64:v1.10.0

  docker rmi $url/sach-k8s/kube-controller-manager-amd64:v1.10.0

  # networking

  docker rmi $url/sach-k8s/flannel:v0.10.0-amd64

  docker rmi $url/sach-k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.8

  docker rmi $url/sach-k8s/k8s-dns-sidecar-amd64:1.14.8

  docker rmi $url/sach-k8s/k8s-dns-kube-dns-amd64:1.14.8

  docker rmi $url/sach-k8s/pause-amd64:3.1

  docker rmi $url/sach-k8s/kube-proxy-amd64:v1.10.0

  #dashboard

  docker rmi $url/sach-k8s/kubernetes-dashboard-amd64:v1.8.3

  #heapster

  docker rmi $url/sach-k8s/heapster-influxdb-amd64:v1.3.3

  docker rmi $url/sach-k8s/heapster-grafana-amd64:v4.4.3

  docker rmi $url/sach-k8s/heapster-amd64:v1.4.2

  #ingress

  docker rmi $url/sach-k8s/nginx-ingress-controller:0.15.0

  docker rmi $url/sach-k8s/defaultbackend:1.4

  

  Run the script:

    ./docker.sh
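Each image in docker.sh goes through the same pull/tag/rmi sequence, so the script can be condensed into a small helper that lists every image once. This is a dry-run sketch (the helper name `retag` is mine, `echo` stands in for the real docker commands, and only a few of the images above are shown):

```shell
# Sketch: the same pull/tag/rmi sequence as docker.sh, written as a
# helper so each image appears once. echo lets the mapping be dry-run;
# swap in the real docker commands on a node.
url=registry.cn-hangzhou.aliyuncs.com/sach-k8s

retag() {   # retag <mirror-image:tag> <target-image:tag>
  echo "${url}/$1 -> $2"
  # docker pull "${url}/$1"
  # docker tag  "${url}/$1" "$2"
  # docker rmi  "${url}/$1"
}

retag etcd-amd64:3.1.12            k8s.gcr.io/etcd-amd64:3.1.12
retag kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
retag flannel:v0.10.0-amd64        quay.io/coreos/flannel:v0.10.0-amd64
# ...the remaining images from the script above follow the same pattern
```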

After every node has pulled all the images, initialize the cluster on the master node:

  Initialize:

    kubeadm init --kubernetes-version=v1.10.0  --apiserver-advertise-address 192.168.254.128 --pod-network-cidr=10.244.0.0/16

  Configure kubectl:

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  If running as root:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  If not running as root:

    echo "source <(kubectl completion bash)" >> ~/.bashrc

  Initialize the pod network (this article uses flannel):

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

  Check node status:

    kubectl get nodes

    (the master node shows NotReady because it is not yet scheduling pods)

  Allow the master node to schedule pods:

    kubectl taint nodes --all node-role.kubernetes.io/master-

  View the logs:

    journalctl -xeu kubelet, or journalctl -xeu kubelet > a to dump them to a file

  Check pod status:

    kubectl get pods --all-namespaces

Have the worker nodes join the cluster:

  kubeadm join --token q500wd.kcjrb2zwvhwqt7su 192.168.254.128:6443 --discovery-token-ca-cert-hash sha256:29e091cca420e505d0c5e091e68f6b5c4ba3f2a54fdcd693c681307c8a041a8b

  (the token and certificate hash were printed to the master node's console when the cluster was initialized; look for token and discovery-token-ca-cert-hash in that output)

  Check the cluster nodes:

    kubectl get nodes

  If a worker fails to join, the token may have expired; check the kubelet logs:

    journalctl -xeu kubelet

  查看token是否過時:

    kubeadm token list

  If it has expired, generate a new token and certificate hash:

    kubeadm token create

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

    (example output: 0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538)

    Delete the old certificate:

      rm -rf  /etc/kubernetes/pki/ca.crt

    Reset the worker node state:

      kubeadm reset

    After the steps above, simply rejoin the cluster.
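The rejoin command can be assembled from the new token and the CA hash computed above. A sketch using this guide's example values (substitute your own token, hash and master address; newer kubeadm releases can reportedly also print the whole command via `kubeadm token create --print-join-command`):

```shell
# Sketch: build the worker join command from a fresh token and the CA
# hash. All three values are the example values used earlier in this
# guide, not values for your cluster.
MASTER_IP=192.168.254.128
TOKEN=q500wd.kcjrb2zwvhwqt7su      # from `kubeadm token create`
CA_HASH=29e091cca420e505d0c5e091e68f6b5c4ba3f2a54fdcd693c681307c8a041a8b  # from the openssl pipeline
echo "kubeadm join --token ${TOKEN} ${MASTER_IP}:6443 --discovery-token-ca-cert-hash sha256:${CA_HASH}"
```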

    (If running commands on a worker node fails, e.g.:

      kubectl get nodes

      reports The connection to the server localhost:8080 was refused - did you specify the right host or port?, the kubeconfig has not been applied; fix it as follows:

        sudo cp /etc/kubernetes/kubelet.conf $HOME/

        sudo chown $(id -u):$(id -g) $HOME/kubelet.conf

        export KUBECONFIG=$HOME/kubelet.conf)

Deploy an application:

  Create a pod (via a Deployment):

    kubectl run nginx --replicas=1 --labels="run=load-balancer-example" --image=nginx  --port=80

    (--replicas sets the number of replicas)

  Query deployment information:

    kubectl get deployments nginx 

    kubectl describe deployments nginx

    kubectl get replicasets

    kubectl describe replicasets

  Create a service:

    kubectl expose deployment nginx --type=NodePort --name=example-service

  Inspect the service:

    kubectl describe services example-service

  (If the service cannot be reached from outside, the cause is often simply that the Service's selector does not match the pod's labels. This is easy to verify from the service's endpoints: if the endpoints list is empty, the selector is wrong, and changing it to match the pod's labels fixes it. See https://blog.csdn.net/bluishglc/article/details/52440312)
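That endpoints check can be scripted. A sketch (the function name `check_endpoints` and the piped usage are illustrative, not part of kubectl):

```shell
# Sketch: flag Services whose selector matches no pods by reading the
# output of `kubectl get endpoints` and looking for an empty ENDPOINTS
# column (shown as <none>).
check_endpoints() {   # reads `kubectl get endpoints` output on stdin
  awk 'NR > 1 {
    if ($2 == "<none>" || $2 == "")
      print $1 ": selector matches no pods"
    else
      print $1 ": ok"
  }'
}
# Real usage on the cluster:
#   kubectl get endpoints example-service | check_endpoints
```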

  

ps:

  If the cluster is unreachable after a server reboot (e.g. kubectl get pods fails), disable swap again and restart kubelet:

    sudo swapoff -a

    systemctl daemon-reload

    systemctl restart kubelet

  Check kubelet status:

    systemctl status kubelet

  kubelet logs:

    journalctl -xefu kubelet

  Check swap status:

    cat /proc/swaps
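To keep swap off across reboots instead of re-running swapoff every time, the swap entry in /etc/fstab can be commented out. A sketch (the helper name is mine; run it against a backup copy first):

```shell
# Sketch: comment out swap entries in an fstab so swap stays off after
# a reboot and kubelet starts cleanly.
disable_swap_lines() {   # reads an fstab on stdin, writes the edited fstab
  sed 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/'
}
# Real usage (back up /etc/fstab first):
#   sudo cp /etc/fstab /etc/fstab.bak
#   disable_swap_lines < /etc/fstab.bak | sudo tee /etc/fstab
```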

 

Install the Dashboard:

  Create the manifest file:

    touch kubernetes-dashboard.yaml

  Add the following configuration:

# Copyright 2017 The Kubernetes Authors.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

#     http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

 

# Configuration to deploy release version of the Dashboard UI compatible with

# Kubernetes 1.8.

#

# Example usage: kubectl create -f <this_file>

 

# ------------------- Dashboard Secret ------------------- #

 

apiVersion: v1

kind: Secret

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard-certs

  namespace: kube-system

type: Opaque

 

---

# ------------------- Dashboard Service Account ------------------- #

 

apiVersion: v1

kind: ServiceAccount

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

 

---

# ------------------- Dashboard Role & Role Binding ------------------- #

 

kind: Role

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

rules:

  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.

- apiGroups: [""]

  resources: ["secrets"]

  verbs: ["create"]

  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.

- apiGroups: [""]

  resources: ["configmaps"]

  verbs: ["create"]

  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.

- apiGroups: [""]

  resources: ["secrets"]

  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]

  verbs: ["get", "update", "delete"]

  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.

- apiGroups: [""]

  resources: ["configmaps"]

  resourceNames: ["kubernetes-dashboard-settings"]

  verbs: ["get", "update"]

  # Allow Dashboard to get metrics from heapster.

- apiGroups: [""]

  resources: ["services"]

  resourceNames: ["heapster"]

  verbs: ["proxy"]

- apiGroups: [""]

  resources: ["services/proxy"]

  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]

  verbs: ["get"]

 

---

apiVersion: rbac.authorization.k8s.io/v1

kind: RoleBinding

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: kubernetes-dashboard-minimal

subjects:

- kind: ServiceAccount

  name: kubernetes-dashboard

  namespace: kube-system

 

---

# ------------------- Dashboard Deployment ------------------- #

 

kind: Deployment

apiVersion: apps/v1beta2

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  replicas: 1

  revisionHistoryLimit: 10

  selector:

    matchLabels:

      k8s-app: kubernetes-dashboard

  template:

    metadata:

      labels:

        k8s-app: kubernetes-dashboard

    spec:

      containers:

      - name: kubernetes-dashboard

        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

        ports:

        - containerPort: 8443

          protocol: TCP

        args:

          - --auto-generate-certificates

          # Uncomment the following line to manually specify Kubernetes API server Host

          # If not specified, Dashboard will attempt to auto discover the API server and connect

          # to it. Uncomment only if the default does not work.

          # - --apiserver-host=http://my-address:port

        volumeMounts:

        - name: kubernetes-dashboard-certs

          mountPath: /certs

          # Create on-disk volume to store exec logs

        - mountPath: /tmp

          name: tmp-volume

        livenessProbe:

          httpGet:

            scheme: HTTPS

            path: /

            port: 8443

          initialDelaySeconds: 30

          timeoutSeconds: 30

      volumes:

      - name: kubernetes-dashboard-certs

        secret:

          secretName: kubernetes-dashboard-certs

      - name: tmp-volume

        emptyDir: {}

      serviceAccountName: kubernetes-dashboard

      # Comment the following tolerations if Dashboard must not be deployed on master

      tolerations:

      - key: node-role.kubernetes.io/master

        effect: NoSchedule

 

---

# ------------------- Dashboard Service ------------------- #

 

kind: Service

apiVersion: v1

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  ports:

    - port: 443

      targetPort: 8443

  selector:

    k8s-app: kubernetes-dashboard

 

---

 

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: kubernetes-dashboard

  labels:

    k8s-app: kubernetes-dashboard

subjects:

  - kind: ServiceAccount

    name: kubernetes-dashboard

    namespace: kube-system

roleRef:

  kind: ClusterRole

  name: cluster-admin

  apiGroup: rbac.authorization.k8s.io

 

  Install it:

    kubectl apply -f kubernetes-dashboard.yaml

  Change the service type so the Dashboard is reachable from outside the cluster:

    kubectl -n kube-system edit service kubernetes-dashboard

  In the editor, change the service type to NodePort (the original post showed this edit as a screenshot).

  Check the service after the change:

    kubectl get services kubernetes-dashboard -n kube-system

  The Dashboard can then be reached at nodeIp:[exposed port]; on the login page, choose token login.

  To get the token:

    kubectl -n kube-system get secret

  (find the secret whose Name is kubernetes-dashboard-token-XXXXX; here the suffix is 47psh)

    kubectl -n kube-system describe secret kubernetes-dashboard-token-47psh
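Since the random suffix differs per cluster, the secret-name lookup can be scripted instead of copied by hand. A sketch (the function name `find_token_secret` is illustrative):

```shell
# Sketch: pick the dashboard token secret name out of the secret
# listing, so the random suffix (47psh here) need not be copied by hand.
find_token_secret() {   # reads `kubectl -n kube-system get secret` on stdin
  awk '/kubernetes-dashboard-token/ {print $1}'
}
# Real usage on the cluster:
#   kubectl -n kube-system describe secret \
#     "$(kubectl -n kube-system get secret | find_token_secret)"
```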

    

  Paste the token into the page to enter the dashboard.

 

With that, the installation is complete and the Kubernetes cluster can be managed from the dashboard.
