A complete record of deploying a Kubernetes 1.13.1 cluster with kubeadm

  • What is k8s

  Kubernetes, abbreviated k8s, is the container cluster management system open-sourced by Google. Building on Docker, it gives containerized applications a complete set of capabilities — deployment, resource scheduling, service discovery, dynamic scaling, and more — making large container clusters far easier to manage. k8s is the product of the move from containers to container clouds. But k8s is no silver bullet and does not necessarily fit every cloud scenario. The official "What Kubernetes is not" explanation may help us understand it better.

  Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Preserving user choice is very important to us.

    •   Kubernetes does not limit the types of applications supported. It does not dictate application frameworks (e.g. Wildfly), does not restrict the supported language runtimes (e.g. Java, Python, Ruby), does not cater only to 12-factor applications, and does not distinguish "apps" from "services". Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it can run on Kubernetes.
    •   Kubernetes does not provide middleware (e.g. message queues), data-processing frameworks (e.g. Spark), databases (e.g. mysql), or cluster storage systems (e.g. Ceph) as built-in services. Such applications can still run on Kubernetes.
    •   Kubernetes does not provide a click-to-deploy service marketplace.
    •   Kubernetes is non-prescriptive from source code to image. It does not deploy source code and does not build your application. Continuous Integration (CI) workflows are an area where different users and projects have their own requirements and preferences, so we support layering CI workflows on Kubernetes without dictating how they should work.
    •   Kubernetes allows users to choose their own logging, monitoring, and alerting systems (although we do provide some integrations as proof of concept).
    •   Kubernetes neither provides nor mandates a comprehensive application configuration language/system (e.g. jsonnet).
    •   Kubernetes neither provides nor adopts any comprehensive machine configuration, maintenance, management, or self-healing system.
  •  The overall architecture of Kubernetes is as follows:

The control node, i.e. the Master node, is composed of three closely cooperating independent components: kube-apiserver, responsible for the API service; kube-scheduler, responsible for scheduling; and kube-controller-manager, responsible for container orchestration. The cluster's persistent data is processed by kube-apiserver and stored in Etcd.

On the compute nodes, the core component is one called the kubelet.

In the Kubernetes project, the kubelet is mainly responsible for the following:

  1. The kubelet interacts with the container runtime (for running containers — e.g. the Docker project). This interaction relies on a remote-call interface called CRI (Container Runtime Interface), which defines the core operations of a container runtime, such as all the parameters needed to start a container.

This is why the Kubernetes project does not care which container runtime you deploy or what technology it is implemented with: as long as the runtime can run standard container images, it can plug into Kubernetes by implementing CRI.

  2. The concrete container runtime — the Docker project, for example — generally interacts with the underlying Linux operating system through the OCI container runtime specification, i.e. it translates CRI requests into calls against the Linux OS (manipulating Linux Namespaces, Cgroups, and so on).

  3. In addition, the kubelet talks over gRPC to plugins called Device Plugins. These plugins are the Kubernetes project's main mechanism for managing physical host devices such as GPUs, and are a capability anyone running machine-learning training or high-performance workloads on Kubernetes must pay attention to.

  4. Another important job of the kubelet is to invoke network plugins and storage plugins to configure networking and persistent storage for containers. The interfaces through which these plugins interact with the kubelet are CNI (Container Networking Interface) and CSI (Container Storage Interface), respectively.

In other words, the kubelet is a component reimplemented from the ground up specifically to realize the Kubernetes project's container-management capabilities.
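To make the CNI side of this concrete: a node's network plugin is selected through a JSON configuration file dropped under /etc/cni/net.d/. The sketch below uses the reference bridge plugin from the CNI specification purely as an illustration — Calico, installed later in this post, writes its own (different) configuration file:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.0.0/16"
  }
}
```

The kubelet hands each pod's network namespace to whatever plugin this file names; swapping plugins means swapping this configuration, not the kubelet.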

 

  • Deploying Kubernetes

    • Install docker

                     Since version 1.6, Kubernetes has used the CRI (Container Runtime Interface). The default container runtime is still Docker, implemented via the dockershim CRI built into the kubelet.

apt-get remove docker-ce
apt autoremove
apt-get install docker-ce
Start docker:
systemctl enable docker
systemctl start docker
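Optionally — this step is an addition, not part of the original procedure — Docker's daemon configuration can be adjusted before installing the kubelet. For example, a registry mirror speeds up image pulls from within China. A hypothetical /etc/docker/daemon.json (apply with systemctl restart docker; the mirror URL is an example, substitute your own):

```json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
```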
    • Install kubeadm, kubelet, kubectl

 kubeadm: the command-line tool that bootstraps the k8s cluster.

kubelet: the core component that runs on every node in the cluster, performing operations such as starting pods and containers.

kubectl: the command-line tool for operating the cluster.

First, add the apt-key:

sudo apt update && sudo apt install -y apt-transport-https curl

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

 

Add the kubernetes source:

sudo vim /etc/apt/sources.list.d/kubernetes.list

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

 

Install:

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

 

 

Initialize the Master node

Before initializing, there are a few points to note:


1. Choose a network plugin, and check whether it requires any parameters to be specified when initializing the Master; for example, we may need to set --pod-network-cidr according to the chosen plugin. See: Installing a pod network add-on.

2. kubeadm uses the default network interface eth0 (usually the private IP) as the Master node's advertise address. To use a different network interface, set --apiserver-advertise-address=<ip-address>. For IPv6, an IPv6 address must be used, e.g. --apiserver-advertise-address=fd00::101.

3. Version 1.13 finally fixes the pain point of being unable to pull foreign images from within China: it adds an --image-repository parameter, whose default is k8s.gcr.io. We point it at the domestic mirror: registry.aliyuncs.com/google_containers.

4. We also need to specify --kubernetes-version, because its default value, stable-1, triggers a download of the latest version number from https://dl.k8s.io/release/stable-1.txt. Pinning it to a fixed version (latest: v1.13.1) skips that network request.

 

 #kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [fn004 localhost] and IPs [121.197.130.187 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [fn004 localhost] and IPs [121.197.130.187 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [fn004 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 121.197.130.187]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.504803 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "fn004" as an annotation
[mark-control-plane] Marking the node fn004 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node fn004 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: b0x4dv.nbut63ktiaikcc24
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join [公網IP]:6443 --token b0x4dv.nbut63ktiaikcc24 --discovery-token-ca-cert-hash sha256:551fe78b50dfe52410869685b7dc70b9a27e550241a6112d8d1fef2073759bb4

 

  

If init reports errors and you need to run it again, you can run #kubeadm reset first and then re-initialize the cluster.

Then run:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

  

kubectl get pods --all-namespaces  // coredns shows a Pending status, because we have not yet installed a network plugin

 

   

Calico is a pure layer-3 virtual networking solution. Calico assigns each container an IP, and every host acts as a router, connecting the containers on different hosts. Unlike VxLAN, Calico adds no extra packet encapsulation and needs no NAT or port mapping, so both its scalability and its performance are very good.

By default, the Calico network plugin uses the 192.168.0.0/16 subnet. At init time we already passed --pod-network-cidr=192.168.0.0/16 to match Calico; you can of course also edit the calico.yaml file to specify a different subnet.
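For reference, the subnet lives in an environment variable on the calico-node DaemonSet inside calico.yaml. If you chose a different --pod-network-cidr at init time, this fragment (a sketch based on the v3.3 manifest; surrounding fields omitted) must be edited to match:

```yaml
# excerpt from calico.yaml — calico-node DaemonSet, container env section
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
```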

Install the Calico networking components with the following commands:

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

 

If image pulls fail (as in the screenshot above), the cause can be found with systemctl status kubelet. When everything is healthy, the result looks like this:

NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   calico-node-wdgl5               2/2     Running   0          90s
kube-system   coredns-78d4cf999f-jvxv9        1/1     Running   0          27m
kube-system   coredns-78d4cf999f-lmhdj        1/1     Running   0          27m
kube-system   etcd-fn004                      1/1     Running   0          26m
kube-system   kube-apiserver-fn004            1/1     Running   0          26m
kube-system   kube-controller-manager-fn004   1/1     Running   0          26m
kube-system   kube-proxy-rkzkc                1/1     Running   0          27m
kube-system   kube-scheduler-fn004            1/1     Running   0          27m

 

That completes the deployment of a master node; next we can join worker nodes and run a test.

Master isolation

By default, for security reasons, the cluster does not schedule pods onto the Master node. In a development environment, however, we may have only a single Master node; in that case the restriction can be lifted with the following command:

kubectl taint nodes --all node-role.kubernetes.io/master-

Join a worker node

Log in to another machine, B:

Run directly: kubeadm join [masterIP]:6443 --token b0x4dv.nbut63ktiaikcc24 --discovery-token-ca-cert-hash sha256:551fe78b50dfe52410869685b7dc70b9a27e550241a6112d8d1fef2073759bb4

root@xxxx:/etc/kubernetes# kubeadm join [master_ip]:6443 --token b0x4dv.nbut63ktiaikcc24 --discovery-token-ca-cert-hash sha256:551fe78b50dfe52410869685b7dc70b9a27e550241a6112d8d1fef2073759bb4
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "master_ip:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://master_ip:6443"
[discovery] Requesting info from "https://master_ip:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "master_ip:6443"
[discovery] Successfully established connection with API Server "master_ip:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "fn001" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

On the master node, you can list tokens with #kubeadm token list.
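The sha256 value passed as --discovery-token-ca-cert-hash can also be recomputed from the cluster CA certificate with openssl. The sketch below demonstrates the computation on a throwaway self-signed certificate so it can run anywhere; on a real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA cert purely to demonstrate the hash computation;
# on a real master, use /etc/kubernetes/pki/ca.crt instead of /tmp/demo-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null

# The hash is the sha256 digest of the DER-encoded public key of the CA cert.
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //'
```

The 64-hex-character output is what goes after "sha256:" in the kubeadm join command.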

等 一下子就能夠在master節點查看節點狀態:

 

 Test

First, verify that kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network are working:

kubectl create deployment nginx --image=nginx:alpine   // deploy an nginx Deployment
kubectl scale deployment nginx --replicas=2            // scale it to 2 pods
kubectl get pods -l app=nginx -o wide                  // verify the nginx pods are running; each gets a cluster IP starting with 192.168.
kubectl expose deployment nginx --port=80 --type=NodePort   // expose the service externally via NodePort
kubectl get services nginx                             // look up the port reachable from outside the cluster
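The imperative commands above are roughly equivalent to applying the following declarative manifest (a sketch; defaults such as the auto-assigned nodePort are filled in by the API server):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
```

Saved as nginx.yaml, this could be applied with kubectl apply -f nginx.yaml and removed with kubectl delete -f nginx.yaml.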

 

 

 Troubleshooting:

systemctl status kubelet   // restarts kept failing because the config files had been modified by hand; the config files below are from version 1.13, for reference
The following errors appeared:
kubelet[12305]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. S
systemd[1]: kubelet.service: Service lacks both ExecStart= and ExecStop= setting. Refusing.

 

Check that the two config files /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service were generated correctly. The correct contents are as follows:

vim  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  

 

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

 

vim /lib/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

 

References:

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
