Kubernetes is a cluster management system for Docker containers, and it can now also run on ARM clusters. This article walks through installing and using Kubernetes on the HypriotOS operating system (on Raspberry Pi). It also runs on a variety of ARM boards running the Armbian operating system (https://www.armbian.com/; see https://www.armbian.com/download/).
For hardware, you need at least two Raspberry Pis that can reach each other and the Internet.
First, we need an operating system: download and flash HypriotOS. The fastest way is to use the flash tool, like this:
flash --hostname node01 https://github.com/hypriot/image-builder-rpi/releases/download/v1.4.0/hypriotos-rpi-v1.4.0.img.zip
Repeat this for all of your Raspberry Pis, then boot them.
Then SSH into the Raspberry Pis:
ssh pirate@node01.local
The password on first boot is hypriot.
Root privileges are needed, so switch to the root account:
sudo su -
A few commands are needed to install Kubernetes and its dependencies. First, install the key for the Kubernetes APT repository and add the package source:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
… then install kubeadm on all nodes:
$ apt-get update && apt-get install -y kubeadm
上面的操做結束後, 初始化 Kubernetes ,在master node 使用:ssh
$ kubeadm init --pod-network-cidr 10.244.0.0/16
Adding --pod-network-cidr to this command is important, because we will use the flannel virtual network. Some notes about flannel follow; be sure to read them before proceeding:
Some notes about flannel: We picked flannel here because that’s the only available solution for ARM at the moment (this is subject to change in the future though).
flannel can use and is using in this example the Kubernetes API to store metadata about the Pod CIDR allocations, and therefore we need to tell Kubernetes first which subnet we want to use. The subnet we chose here is somehow fixed, because the flannel configuration file that we’ll use later in this guide predefines the equivalent subnet. Of course, you can adapt both.
If you are connected over WiFi rather than Ethernet, add --apiserver-advertise-address=&lt;wifi-ip-address&gt; as a parameter to kubeadm init so that the Kubernetes API is advertised on the WiFi address. There are some other kubeadm init parameters you can experiment with as well.
When Kubernetes has initialized, the terminal window shows output like the following:
To start using the cluster, run (as a regular user):
$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
Next, as the output above explains, join the other nodes to the cluster with the kubeadm join command. For example (run on each worker node):
$ kubeadm join --token=bb14ca.e8bbbedf40c58788 192.168.0.34
After a few seconds, you will see all of the nodes on the master node by running the following command:
$ kubectl get nodes
The terminal shows information like this:
Finally, we need to set up flannel v0.7.1 as the Pod network driver. Do not use v0.8.0: a known bug will cause CrashLoopBackOff errors. On the master node, run:
$ curl -sSL https://rawgit.com/coreos/flannel/v0.7.1/Documentation/kube-flannel-rbac.yml | kubectl create -f -
$ curl -sSL https://rawgit.com/coreos/flannel/v0.7.1/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
The terminal shows:
Then wait until flannel and the other cluster-internal Pods are Running; check their status with:
$ kubectl get po --all-namespaces
Good, everything looks to be Running:
Kubernetes is now set up! Next, let's actually launch a service on the cluster.
Launch a simple service to verify that the cluster is working, like this:
$ kubectl run hypriot --image=hypriot/rpi-busybox-httpd --replicas=3 --port=80
This command starts a deployment named hypriot using the hypriot/rpi-busybox-httpd image on port 80. With replicas set to 3, three container instances will be started.
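As a rough sketch, the Deployment that kubectl run creates under the hood is approximately equivalent to the following manifest (using the same extensions/v1beta1 API group as the Ingress object later in this guide; the label is illustrative, not the exact generated object):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hypriot
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: hypriot
    spec:
      containers:
      - name: hypriot
        image: hypriot/rpi-busybox-httpd
        ports:
        - containerPort: 80
```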
Next, expose the Pods of that deployment as a service with a stable name and IP:
$ kubectl expose deployment hypriot --port 80
Done! Now check whether the desired containers are up and running:
$ kubectl get endpoints hypriot
You will see three endpoints (= containers) like this:
Use curl to check whether the service is up:
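For example, one way to do this from the master node (a sketch; it assumes the hypriot service created above exists, and reuses the -o template style of lookup that appears later in this guide for the dashboard port):

```shell
# Look up the service's cluster-internal IP, then request the page from
# within the cluster. The IP is assigned by Kubernetes and will differ
# from cluster to cluster.
CLUSTER_IP=$(kubectl get service hypriot -o template --template="{{.spec.clusterIP}}")
curl "$CLUSTER_IP"
```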
The service responds with the HTML shown above. Great! Next, we will access this service from outside the cluster.
We will use an example Ingress Controller to manage incoming external requests and reach the service, with Traefik doing the load balancing. If you would like to learn more about Ingress and Traefik, the following is recommended reading:
In contrast to Docker Swarm, Kubernetes itself does not provide an option to define a specific port that you can use to access a service. According to Lucas this is an important design decision; routing of incoming requests should be handled by a third party, such as a load balancer or a webserver, but not by the core product. The core Kubernetes should be lean and extensible, and encourage others to build tools on top of it for their specific needs.
Regarding load balancers in front of a cluster, there is the Ingress API object and some sample Ingress Controllers. Ingress is a built-in way of exposing Services to the outside world via an Ingress Controller that anyone can build. An Ingress rule defines how traffic should flow from the node the Ingress controller runs on to services inside of the cluster.
First, deploy Traefik as the load balancer:
$ kubectl apply -f https://raw.githubusercontent.com/hypriot/rpi-traefik/master/traefik-k8s-example.yaml
Label the node you want to be the load balancer. Then the Traefik Ingress Controller will land on the node you specified. Run:
$ kubectl label node <load balancer-node> nginx-controller=traefik
Lastly, create an Ingress object that makes Traefik load balance traffic on port 80 to the hypriot service:
$ cat > hypriot-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hypriot
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hypriot
          servicePort: 80
EOF
$ kubectl apply -f hypriot-ingress.yaml
Visit the loadbalancing node’s IP address in your browser and you should see a nice web page:
If you don’t see a website there yet, run:
$ kubectl get pods
… and make sure all hypriot Pods are in the Running state.
Wait until you see that all Pods are running, and a nice Hypriot website should appear!
If you want to reset the whole cluster to the state after a fresh install, just run this on each node:
$ kubeadm reset
In addition, it is recommended to delete some additional files as it is mentioned here.
The dashboard is a wonderful interface to visualize the state of the cluster. Start it with:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-arm.yaml
Edit the kubernetes-dashboard service, changing type: ClusterIP to type: NodePort; see Accessing Kubernetes Dashboard for more details.
$ kubectl -n kube-system edit service kubernetes-dashboard
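If you prefer a non-interactive alternative to editing the service by hand, a sketch of the same change using kubectl patch (assuming the dashboard service exists under that name in kube-system, as deployed above):

```shell
# Switch the dashboard service from ClusterIP to NodePort without opening
# an editor; the JSON merge patch only touches the service type.
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort"}}'
```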
The following command provides the port that the dashboard is exposed at on every node with the NodePort function of Services, which is another way to expose your Services to the outside of your cluster:
$ kubectl -n kube-system get service kubernetes-dashboard -o template --template="{{ (index .spec.ports 0).nodePort }}" | xargs echo
Then you can check out the dashboard at any node's IP address on that port! Make sure to use https when accessing the dashboard; for example, if it is running on port 31657, access it at https://node:31657.
Newer versions of the Kubernetes Dashboard require either a Kubeconfig or a Token to view information on the dashboard. Bearer tokens are recommended so that proper permissions can be set up for a user, but the replicaset-controller-token Token may be used for testing.
$ kubectl -n kube-system describe secret `kubectl -n kube-system get secret | grep replicaset-controller-token | awk '{print $1}'` | grep token: | awk '{print $2}'
It was our goal to show that Kubernetes indeed works well on ARM (and ARM 64-bit!). For more examples including the AMD64 platform, check out the official kubeadm documentation.
We might follow-up this blog post with a more in-depth post about the current and planned state of Kubernetes officially on ARM and more, so stay tuned and tell Lucas if that’s something you’re interested in reading.
As always, use the comments below to give us feedback and share this post on Twitter, Google or Facebook.
Mathias Renner and Lucas Käldström
Original post (English): http://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/