This was a successful installation, a winning one, and a record worth keeping.
This article mainly covers installing the master node; adding worker nodes is covered in a separate article.
Kubernetes 1.9.3 is used as the example here.
Kubernetes is a powerful container orchestration platform, but for a system capable of managing large clusters, installation is no small feat. Because the original Kubernetes Docker images and installation files are hosted on gcloud and those links are unreachable, some manual work is unavoidable. On top of that, Kubernetes itself evolves quickly, and the assorted pitfalls and differences between versions make getting it running successfully even harder.
The simplest approach is to use minikube (see: http://www.javashuo.com/article/p-xcmzctpr-ed.html) or Docker for Mac/Windows (see: http://www.javashuo.com/article/p-zfwamaum-me.html), but these only suit development environments (they are also quite handy as desktop-level services) and do not support multi-machine clusters or scaling the number of nodes.
kubeadm is a Kubernetes installation tool that can deploy a Kubernetes cluster quickly, but the problem above remains, so we first pull the images from DockerHub and retag them to the required names. Before that, the current version still needs a few small manual adjustments (the Kubernetes base services are fully containerized in this version, so later versions may automate the whole installation), as follows:
Because of compatibility limits in the current version, a few settings need to be adjusted on Ubuntu:
The following command clears all existing firewall rules:
iptables -F
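Since the goal above is an empty rule set, you may also want to disable ufw (Ubuntu's firewall front end) so that its rules are not reloaded on the next boot. This is an optional extra step, not part of the original notes:

# Optional: check and disable ufw (assumes ufw is installed and active)
sudo ufw status
sudo ufw disable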
Make sure the cgroup driver used by kubelet matches Docker's. Either update Docker with the method below:
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Or set the Kubernetes cgroup driver instead: set kubelet's --cgroup-driver flag to the same value Docker uses (e.g. cgroupfs).
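After changing the setting and restarting Docker, it is worth double-checking which cgroup driver is actually in effect. A small verification step, not part of the original procedure:

sudo systemctl restart docker
docker info | grep -i "cgroup driver"   # should report systemd (or cgroupfs, if you went the other way)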
The highest Docker version validated against Kubernetes 1.9.3 is CE 17.03; install it as follows:
apt-get update
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
    "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
    $(lsb_release -cs) \
    stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
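To confirm that the pinned 17.03 package is the one that actually got installed, a quick check (not in the original):

docker version --format '{{.Server.Version}}'   # expect something like 17.03.x-ce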
! Set up the proxy for apt-get separately.
$ nano /etc/apt/apt.conf
Acquire::http::Proxy "http://192.168.199.99:9999";
Acquire::https::Proxy "http://192.168.199.99:9999";
sudo -E https_proxy=192.168.199.99:9999 apt install docker-ce=17.03.2~ce-0~ubuntu-xenial
! Configure a dedicated proxy for Docker using the format below, and clear all system-wide proxy settings (Kubernetes needs to reach services on the local machine).
Environment="HTTP_PROXY=http://192.168.199.99:9999/" Environment="HTTPS_PROXY=http://192.168.199.99:9999/" Environment="NO_PROXY=localhost,127.0.0.0/8"
Even so, installation can still be painfully slow at times; retrying a few more times is the only remedy.
The original Kubernetes Docker images live on gcloud; even with a proxy you would have to register, log in, and use the gcloud tooling to fetch them, and pulling them with docker directly fails in puzzling ways (the error messages are vague). Instead, you can pull replicas of the Kubernetes images from DockerHub and then retag them with the names kubeadm expects (later versions should allow specifying the image source through a kubeadm configuration file).
The script is below. If you need other container images, add them the same way, and change the version numbers to whatever you require.
echo "==================================================" echo "Set proxy to http://192.168.199.99:9999..." echo "" export http_proxy=http://192.168.199.99:9999 export https_proxy=http://192.168.199.99:9999 echo "==================================================" echo "" echo "Pulling Docker Images from mirrorgooglecontainers..." echo "==>kube-apiserver:" docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.9.3 docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.9.3 gcr.io/google_containers/kube-apiserver-amd64:v1.9.3 echo "==>kube-controller-manager:" docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.9.3 docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.9.3 gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3 echo "==>kube-scheduler:" docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.9.3 docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.9.3 gcr.io/google_containers/kube-scheduler-amd64:v1.9.3 echo "==>kube-proxy:" docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.9.3 docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.9.3 gcr.io/google_containers/kube-proxy-amd64:v1.9.3 echo "==>k8s-dns-sidecar:" docker pull mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.8 docker tag mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.8 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8 echo "==>k8s-dns-kube-dns:" docker pull mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.8 docker tag mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.8 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8 echo "==>k8s-dns-dnsmasq-nanny:" docker pull mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.8 docker tag mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.8 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8 echo "==>etcd:" docker pull mirrorgooglecontainers/etcd-amd64:3.1.11 docker tag mirrorgooglecontainers/etcd-amd64:3.1.11 gcr.io/google_containers/etcd-amd64:3.1.11 echo "==>pause:" docker pull mirrorgooglecontainers/pause-amd64:3.0 docker tag mirrorgooglecontainers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0 echo finished. echo "More update, please visit: https://hub.docker.com/r/mirrorgooglecontainers" echo ""
It is recommended to save the content above as getkubeimages.sh and then run it, as follows:
gedit getkubeimages.sh
# Paste the script content above and save.
sudo chmod +x getkubeimages.sh
sudo ./getkubeimages.sh
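When the script finishes, it is worth checking that the retagged gcr.io images are actually present locally (a verification step, not in the original):

docker images | grep gcr.io/google_containers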
kubeadm is the command-line tool for installing and maintaining Kubernetes.
echo "添加Kubernetes安裝源認證key:" sudo curl -sSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add echo "添加Kubernetes安裝源:" sudo echo 「deb http://apt.kubernetes.io/ kubernetes-xenial main」 > /etc/apt/sources.list.d/kubernetes.list echo "更新系統軟件包列表:" sudo apt update echo "查看Kubernetes的可用版本:" apt-cache madison kubeadm echo "安裝kubeadm 1.9.3: " apt-get install -y kubeadm=1.9.3-00
For kubeadm 1.9.3, add the following content to the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file.
[Service] Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Also add $KUBELET_CGROUP_ARGS to the startup parameters (this variable is missing from the ExecStart line shipped with this version).
The final /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file looks like this:
[Service] Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true" Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local" Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt" Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0" Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki" Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false" ExecStart= ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS
systemctl daemon-reload
systemctl restart kubelet
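Before moving on you can verify that kubelet picked up the drop-in: the Drop-In: section of the status output should list 10-kubeadm.conf. Note that until kubeadm init has run, kubelet restarts in a loop because it has no configuration yet, so an activating/auto-restart state here is normal. A verification step, not in the original:

systemctl status kubelet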
kubeadm init --kubernetes-version=v1.9.3 --pod-network-cidr=192.168.0.0/16
Or:
kubeadm init --kubernetes-version=v1.9.3 --pod-network-cidr 10.244.0.0/16
To use CoreDNS instead of kube-dns, add the feature gate:
kubeadm init --kubernetes-version=v1.9.3 --pod-network-cidr 10.244.0.0/16 --feature-gates CoreDNS=true
For the full set of kubeadm installation commands, see http://www.javashuo.com/article/p-youxcash-dm.html.
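The two commands below are useful when joining nodes later: the first regenerates a complete kubeadm join command with a non-expiring token, and the commented-out openssl pipeline computes the sha256 value that --discovery-token-ca-cert-hash expects, derived from the cluster CA certificate.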
kubeadm token create --print-join-command --ttl 0
#openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Following the prompt printed once the steps above complete, deploy the pod network:
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Check the logs:
kubectl get pods --namespace kube-system
kubectl --namespace kube-system logs kube-flannel-ds-bvxd2
# kube-flannel-ds-bvxd2 above is the flannel pod name; substitute the name returned by get pods.
# If flannel fails, edit /etc/kubernetes/manifests/kube-controller-manager.yaml and add to the command section:
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
# Then run systemctl restart kubelet.
For details on this workaround, see: https://github.com/coreos/flannel/issues/728
By default the master node does not schedule workloads. This can be turned on with the command below, so that the master and a worker node run on the same machine.
kubectl taint nodes --all node-role.kubernetes.io/master-
Use the kubeadm join command to add other worker nodes to the master's cluster.
kubeadm join --token 8dc9d8.df09161bed020a12 192.168.199.106:6443 --discovery-token-ca-cert-hash sha256:16exxx
If the installation fails, use kubeadm reset to reset the environment. Most of the time you also need to reboot the operating system before running kubeadm init again, because some system network service ports are already occupied and cannot easily be freed.
When Kubernetes installs successfully, the output looks like this:
root@kube:/home/supermap# kubeadm init --kubernetes-version v1.9.3 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.199.111]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 33.501916 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kube as master by adding a label and a taint
[markmaster] Master kube tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 8b2ed3.149a349e4b775985
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 8b2ed3.149a349e4b775985 192.168.199.111:6443 --discovery-token-ca-cert-hash sha256:ab69621f2117f2b283df725859724efc71c37a20f6da519237ca1dad5a72d9b2
後續操做,執行:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
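A quick way to confirm that kubectl can now reach the new cluster (not part of the original steps):

kubectl cluster-info
kubectl get componentstatuses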
Then run kubectl get ns to list the namespaces; the output looks like this:
supermap@kube:~$ kubectl get ns
NAME          STATUS    AGE
default       Active    22m
kube-public   Active    22m
kube-system   Active    22m
Add the local machine as a worker node as well:
kubectl taint nodes --all node-role.kubernetes.io/master-
Check the node status:
supermap@kube:~$ kubectl get node
NAME      STATUS     ROLES     AGE       VERSION
kube      NotReady   master    28m       v1.9.3
The local machine now runs both the master and one worker node.
View node details:
# List the pods:
kubectl get pods --namespace=kube-system -o wide
# Get a pod's details; kube-dns-6f4fd4bdf-895jh here is the pod name returned above.
kubectl get -o json --namespace=kube-system pods/kube-dns-6f4fd4bdf-895jh
Installing DNS, the dashboard, and Helm later also requires the images below. Pull them through a proxy, or find the equivalents on hub.docker.com, pull those, and docker tag them to the corresponding names; otherwise kubectl get pods will show the pods stuck in Pending.
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker pull gcr.io/kubernetes-helm/tiller:v2.8.1
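If no proxy is available, the same pull-and-retag trick used earlier works for these images as well. A sketch for the dashboard image, assuming a mirror of it also exists under mirrorgooglecontainers on DockerHub:

# Pull the mirrored dashboard image (assumed mirror) and retag it with the expected name
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3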
Forward the dashboard's port so it can be reached from outside:
kubectl port-forward kubernetes-dashboard-7798c48646-wkgk4 8443:8443 --namespace=kube-system &
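The pod name (kubernetes-dashboard-7798c48646-wkgk4 above) is generated per installation, so look up the actual name first, for example:

kubectl get pods --namespace=kube-system | grep kubernetes-dashboard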
Now open a browser and go to http://localhost:8443 to see how Kubernetes is running.
And that's it.