Newer releases of kubeadm have changed some parameters, so the installation procedure differs; refer to the installation guide for the new version where applicable.
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Installing from the repo above provides the kubeadm and kubectl commands plus the kubelet service. On the master, the kubelet service is what launches the static pods; all master components run as static pods.
kubeadm init [flags]
Note that the kubernetesVersion field in the config file (or the --kubernetes-version command-line flag) determines which image versions are used.
To run kubeadm without network access, you must pre-pull the core images for the chosen version:

Image Name | v1.10 release branch version |
---|---|
k8s.gcr.io/kube-apiserver-${ARCH} | v1.10.x |
k8s.gcr.io/kube-controller-manager-${ARCH} | v1.10.x |
k8s.gcr.io/kube-scheduler-${ARCH} | v1.10.x |
k8s.gcr.io/kube-proxy-${ARCH} | v1.10.x |
k8s.gcr.io/etcd-${ARCH} | 3.1.12 |
k8s.gcr.io/pause-${ARCH} | 3.1 |
k8s.gcr.io/k8s-dns-sidecar-${ARCH} | 1.14.8 |
k8s.gcr.io/k8s-dns-kube-dns-${ARCH} | 1.14.8 |
k8s.gcr.io/k8s-dns-dnsmasq-nanny-${ARCH} | 1.14.8 |
coredns/coredns | 1.0.6 |

Here v1.10.x means "the latest patch release on the v1.10 branch". ${ARCH} can be one of: amd64, arm, arm64, ppc64le, or s390x. If you run Kubernetes 1.10 or earlier and set --feature-gates=CoreDNS=true, you must also use the coredns/coredns image instead of the three k8s-dns-* images. Since Kubernetes 1.11 you can list and pull the relevant images with the kubeadm config images subcommands:

kubeadm config images list
kubeadm config images pull

Since Kubernetes 1.12, the k8s.gcr.io/kube-*, k8s.gcr.io/etcd and k8s.gcr.io/pause images no longer require the -${ARCH} suffix.
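For an offline install, the pull commands can be scripted instead of typed one by one. A minimal sketch that only prints the commands (the versions shown are the v1.13.4 set used later in this article; adjust them to your release):

```shell
# Print the docker pull commands for the core control-plane images
# of a given release. Run the printed commands on a connected host,
# then docker save / docker load the images onto the offline machines.
print_pull_commands() {
  local version="$1"
  for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
    echo "docker pull k8s.gcr.io/${img}:${version}"
  done
  echo "docker pull k8s.gcr.io/pause:3.1"
  echo "docker pull k8s.gcr.io/etcd:3.2.24"
  echo "docker pull k8s.gcr.io/coredns:1.2.6"
}

print_pull_commands v1.13.4
```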
Ports on the master node:
Port | Purpose | Used by |
---|---|---|
6443* | apiserver | all |
2379-2380 | etcd | apiserver, etcd |
10250 | kubelet | self, control plane |
10251 | kube-scheduler | self |
10252 | kube-controller-manager | self |
Ports on the worker node:
Port | Purpose | Used by |
---|---|---|
10250 | kubelet | self, control plane |
30000-32767 | NodePort services** | all |
Adjust kernel parameters
Add the following to /etc/sysctl.conf (vim /etc/sysctl.conf):
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Disable the firewall and SELinux:
systemctl disable firewalld
systemctl stop firewalld
systemctl stop iptables
systemctl disable iptables
setenforce 0
Then apply the new kernel settings to the running system:
sysctl -p
After switching the yum repo to the one given at the top of this article:
yum -y install docker kubelet kubeadm ebtables ethtool
Configure /etc/docker/daemon.json:
{
  "insecure-registries": ["http://harbor.test.com"],
  "registry-mirrors": ["https://72idtxd8.mirror.aliyuncs.com"]
}
Create the file /etc/default/kubelet with the following content:
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=systemd
Starting kubelet at this point will fail with:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Start docker first:

systemctl daemon-reload
systemctl start docker
Then check which images are required:
[root@host5 kubernetes]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
(This step no longer works on newer versions either, since flags such as print-defaults have been removed.)
Since these images are all hosted abroad, they cannot be pulled directly, so we need to switch the image source. First generate a config file:
kubeadm config print-defaults --api-objects ClusterConfiguration >kubeadm.conf
In the config file, change:
imageRepository: k8s.gcr.io
to your own private registry or mirror:
imageRepository: docker.io/mirrorgooglecontainers
Sometimes the kubernetesVersion field also needs to be changed; in my installation it did not.
Then run:
kubeadm config images list --config kubeadm.conf
kubeadm config images pull --config kubeadm.conf
kubeadm init --config kubeadm.conf
To check the configuration stored in the cluster afterwards:

kubeadm config view
By default the mirror repository above does not provide the coredns image, so I pulled it and retagged it locally:
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6
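This pull-and-retag pattern can be scripted for any image the mirror is missing. A small sketch that only prints the commands (a dry run; the mirror prefix is the one used in this article):

```shell
# Given a source image and a target repository prefix, print the
# docker pull/tag commands that make the image available locally
# under the name kubeadm expects.
retag_commands() {
  local src="$1" target_repo="$2"
  local name_tag="${src##*/}"    # e.g. coredns:1.2.6
  echo "docker pull ${src}"
  echo "docker tag ${src} ${target_repo}/${name_tag}"
}

retag_commands coredns/coredns:1.2.6 mirrorgooglecontainers
```

Pipe the output to sh (or drop the echoes) to actually run it.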
Now the local images look like this:
[root@host5 kubernetes]# docker images
REPOSITORY                                                 TAG       IMAGE ID       CREATED         SIZE
docker.io/mirrorgooglecontainers/kube-proxy                v1.13.0   8fa56d18961f   3 months ago    80.2 MB
docker.io/mirrorgooglecontainers/kube-apiserver            v1.13.0   f1ff9b7e3d6e   3 months ago    181 MB
docker.io/mirrorgooglecontainers/kube-controller-manager   v1.13.0   d82530ead066   3 months ago    146 MB
docker.io/mirrorgooglecontainers/kube-scheduler            v1.13.0   9508b7d8008d   3 months ago    79.6 MB
docker.io/coredns/coredns                                  1.2.6     f59dcacceff4   4 months ago    40 MB
docker.io/mirrorgooglecontainers/coredns                   1.2.6     f59dcacceff4   4 months ago    40 MB
docker.io/mirrorgooglecontainers/etcd                      3.2.24    3cab8e1b9802   5 months ago    220 MB
docker.io/mirrorgooglecontainers/pause                     3.1       da86e6ba6ca1   14 months ago   742 kB
Then we run:
kubeadm init --config kubeadm.conf
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/    ####### the pod network must be installed separately

You can now join any number of machines by running the following on each node as root:

  kubeadm join 10.11.90.45:6443 --token 2rau0q.1v7r64j0qnbw54ev --discovery-token-ca-cert-hash sha256:eb792e5e9f64eee49e890d8676c0a0561cb58a4b99892d22f57d911f0a3eb7f2
As the output says, we need to run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Otherwise kubectl talks to the default localhost:8080 and fails:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@host5 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
In the newer 1.14.1 release none of the configuration editing above is needed; after pushing the images to your own private registry, just run:
kubeadm init --image-repository harbor.test.com/k8snew
Then follow the prompts to finish.
The output below shows two pods stuck in Pending because I have not yet deployed a network add-on; as noted above, the network must be installed separately.
[root@host5 ~]# kubectl get all --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-9f7ddc475-kwmxg         0/1     Pending   0          45m
kube-system   pod/coredns-9f7ddc475-rjs8d         0/1     Pending   0          45m
kube-system   pod/etcd-host5                      1/1     Running   0          44m
kube-system   pod/kube-apiserver-host5            1/1     Running   0          45m
kube-system   pod/kube-controller-manager-host5   1/1     Running   0          44m
kube-system   pod/kube-proxy-nnvsl                1/1     Running   0          45m
kube-system   pod/kube-scheduler-host5            1/1     Running   0          45m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         46m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   45m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           <none>          45m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           45m

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-9f7ddc475   2         2         0       45m
Next we deploy the calico network add-on. The init output above pointed to:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
That page lists the available network add-ons. Download the yaml file (or apply it directly from the remote URL):
kubectl apply -f calico.yml
.....
image: harbor.test.com/k8s/cni:v3.6.0
.....
image: harbor.test.com/k8s/node:v3.6.0
As shown above, replace the image fields with the images I downloaded locally. If the upstream images cannot be pulled, see my other article, "Changing the default image registry".
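The image substitution can be done with a quick sed pass over the manifest instead of editing by hand. A sketch, assuming the upstream manifest uses the calico/ image prefix (check your calico.yml; harbor.test.com/k8s is the private registry used in this article). It demonstrates on the two image lines; in practice run the same sed over the full file:

```shell
# Rewrite the image repositories in a calico manifest to point at a
# private registry. Demonstrated on a two-line excerpt in /tmp.
cat > /tmp/calico-images.yml <<'EOF'
          image: calico/cni:v3.6.0
          image: calico/node:v3.6.0
EOF

sed -i 's#image: calico/#image: harbor.test.com/k8s/#' /tmp/calico-images.yml
cat /tmp/calico-images.yml
```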
(In the current release the calico components have moved to v3.7.2.)
After that we see:
kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7c69b4dd88-89jnl   1/1     Running   0          6m58s
kube-system   calico-node-7g7gn                          1/1     Running   0          6m58s
kube-system   coredns-f7855ccdd-p8g58                    1/1     Running   0          86m
kube-system   coredns-f7855ccdd-vkblw                    1/1     Running   0          86m
kube-system   etcd-host5                                 1/1     Running   0          85m
kube-system   kube-apiserver-host5                       1/1     Running   0          85m
kube-system   kube-controller-manager-host5              1/1     Running   0          85m
kube-system   kube-proxy-6zbzg                           1/1     Running   0          86m
kube-system   kube-scheduler-host5                       1/1     Running   0          85m
All pods are now running normally.
On nodes to be joined, likewise disable SELinux, adjust the kernel parameters, and so on.
Then run:
kubeadm join 10.11.90.45:6443 --token 05o0eh.6andmi1961xkybcu --discovery-token-ca-cert-hash sha256:6ebbf4aeca912cbcf1c4ec384721f1043714c3cec787c3d48c7845f95091a7b5
To add another master node, append the --experimental-control-plane flag.
If you forget the token, use:
kubeadm token create --print-join-command
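If only the --discovery-token-ca-cert-hash value is needed, it can also be recomputed from the cluster CA certificate with the openssl pipeline documented for kubeadm (the path assumes the default /etc/kubernetes/pki/ca.crt on the master):

```shell
# ca_cert_hash <ca.crt>: print the sha256 hash of the certificate's
# public key, in the format expected by
#   kubeadm join ... --discovery-token-ca-cert-hash sha256:<hash>
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" |
    openssl rsa -pubin -outform der 2>/dev/null |
    openssl dgst -sha256 -hex | sed 's/^.* //'
}

# On a master node:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```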
journalctl -xe -u docker       # view the docker logs
journalctl -xe -u kubelet -f   # follow the kubelet logs
Download the dashboard image and its yaml file:
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
The yaml file is as follows:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: harbor.test.com/k8s/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30003
  selector:
    k8s-app: kubernetes-dashboard
As shown above, the Service has been changed to NodePort, mapping the dashboard to port 30003 on the node.
After kubectl create, you can open the dashboard directly at https://nodeip:30003.
In testing, Chrome would not open it, but Firefox worked.
However, a token is required to log in, so next we generate one.
admin-token.yml:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
After kubectl create -f admin-token.yml:
kubectl get secret -n kube-system | grep admin | awk '{print $1}'
kubectl describe secret admin-token-8hn52 -n kube-system | grep '^token' | awk '{print $2}'   # replace the secret name with the actual one
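The token extraction can be factored into a small helper that reads the kubectl describe output. A sketch (it assumes the describe output contains a single line starting with "token:", which is what kubectl prints for a service-account secret):

```shell
# token_from_describe: read `kubectl describe secret ...` output on
# stdin and print just the bearer token value.
token_from_describe() {
  grep '^token' | awk '{print $2}'
}

# Usage on a live cluster:
#   secret=$(kubectl get secret -n kube-system | grep admin | awk '{print $1}')
#   kubectl describe secret "$secret" -n kube-system | token_from_describe
```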
Note down the token it prints; that is what you log in with.
After a server reboot, start the kubelet service and then start all the containers:
swapoff -a
docker start $(docker ps -a | awk '{ print $1}' | tail -n +2)
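A note on that pipeline: awk '{print $1}' | tail -n +2 takes the first column of docker ps -a and drops the CONTAINER ID header row. The same can be done in a single awk pass; a sketch:

```shell
# ids_from_ps: read `docker ps -a` output on stdin and print every
# container ID (NR > 1 skips the header line).
ids_from_ps() {
  awk 'NR > 1 {print $1}'
}

# Usage: docker start $(docker ps -a | ids_from_ps)
```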
References:
https://github.com/gjmzj/kubeasz
For heapster, the images come from the heapster installation in another of my articles.
There are four yaml files in total (omitted here).
grafana.yaml only needs the image changed:
...
#image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
image: harbor.test.com/rongruixue/heapster_grafana:latest
...
heapster-rbac.yaml needs no changes:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
heapster.yaml needs volume mounts added and the image changed, as follows:
spec:
  serviceAccountName: heapster
  containers:
  - name: heapster
    #image: k8s.gcr.io/heapster-amd64:v1.5.4
    image: harbor.test.com/rongruixue/heapster:latest
    volumeMounts:
    - mountPath: /srv/kubernetes
      name: auth
    - mountPath: /root/.kube
      name: config
    imagePullPolicy: IfNotPresent
    command:
    - /heapster
    - --source=kubernetes:https://kubernetes.default?inClusterConfig=false&insecure=true&auth=/root/.kube/config
    - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
  volumes:
  - name: auth
    hostPath:
      path: /srv/kubernetes
  - name: config
    hostPath:
      path: /root/.kube
Notes:

inClusterConfig=false: do not use the kube config information from the service account;
insecure=true: a shortcut: trust the server certificate presented by kube-apiserver, i.e. skip validation;
auth=/root/.kube/config: the key part. When no service account is used, the credentials in this auth file authenticate heapster to kube-apiserver.

Without this, heapster cannot connect to the apiserver on port 6443.
Note: after installing the newer v1.14.1, this heapster build fails authentication; I switched to these images instead:
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64            v1.5.4
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64   v1.5.2
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64    v5.0.4
influxdb.yaml only needs the image changed:
spec:
  containers:
  - name: influxdb
    #image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
    image: harbor.test.com/rongruixue/influxdb:latest
After kubectl create -f, the heapster pod errors out. The kubelet lacks the --read-only-port=10255 flag; without it, port 10255 is not opened by default, so heapster cannot reach each node's 10255 once the dashboard is deployed:
E0320 14:07:05.008856 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://10.11.90.45:10255/stats/container/": Post http://10.11.90.45:10255/stats/container/: dial tcp 10.11.90.45:10255: getsockopt: connection refused
Checking shows that none of the kubelets listen on 10255. The relevant kubelet ports are:
Parameter | Description | Default |
---|---|---|
--address | address the kubelet service listens on | 0.0.0.0 |
--port | port the kubelet service listens on | 10250 |
--healthz-port | health check service port | 10248 |
--read-only-port | read-only port, accessible without authentication or authorization | 10255 |
Therefore we need to open this read-only port. When starting the kubelet service on each machine, add the port to the kubelet config file:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--read-only-port=10255"
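Since this has to be repeated on every node, a small idempotent helper can apply it. A sketch (it writes to a temp path here for safety; on real nodes point it at /etc/sysconfig/kubelet and restart kubelet afterwards):

```shell
# Append the KUBELET_EXTRA_ARGS line to a kubelet sysconfig file,
# but only if no KUBELET_EXTRA_ARGS entry exists yet (idempotent).
ensure_read_only_port() {
  local conf="$1"
  grep -q '^KUBELET_EXTRA_ARGS=' "$conf" 2>/dev/null ||
    echo 'KUBELET_EXTRA_ARGS="--read-only-port=10255"' >> "$conf"
}

ensure_read_only_port /tmp/kubelet-sysconfig   # use /etc/sysconfig/kubelet on real nodes
```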
With that, the installation succeeds:
[root@host5 heapster]# kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
host4   60m          6%     536Mi           28%
host5   181m         9%     1200Mi          64%