Run swapoff to turn off swap temporarily. This does not survive a reboot; to disable swap permanently, edit /etc/fstab and comment out the swap partition line:
#/dev/mapper/centos-swap swap swap defaults 0 0
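A minimal sketch of both steps (the sed pattern is an assumption; check that it matches your /etc/fstab before running it):

swapoff -a                          # turn off all swap devices immediately
sed -i '/swap/ s/^/#/' /etc/fstab   # comment out the swap entry so it stays off after reboot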
You can refer to the official installation documentation.
$ yum install yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ yum update && yum install docker-ce-18.06.2.ce
Create the file /etc/docker/daemon.json with the following content:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
If you are deploying inside China, you can add a domestic Docker registry mirror to /etc/docker/daemon.json to speed up image pulls:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ]
}
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
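After the restart, it is worth confirming that the new settings took effect. A quick check (an addition, not part of the original steps):

docker info | grep -iE 'cgroup driver|storage driver'
# Expected to report: Cgroup Driver: systemd and Storage Driver: overlay2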
If the node is inside China, you can use a domestic mirror repository; the Aliyun mirror is used here. Create /etc/yum.repos.d/kubernetes.repo with the following content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
If you are outside China, you can use Google's official repository; the file content is as follows:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
kubelet does not support SELinux, so SELinux needs to be set to permissive mode:
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
$ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
$ systemctl enable --now kubelet
On RHEL/CentOS 7, network traffic can be routed incorrectly because iptables is bypassed. You must make sure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration.
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Run sysctl --system to apply the configuration.
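To verify that the value is active, you can read one of the keys back (a quick check, not part of the original steps; if the key is reported as unknown, the br_netfilter kernel module may not be loaded yet):

sysctl net.bridge.bridge-nf-call-iptables
# expected output: net.bridge.bridge-nf-call-iptables = 1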
If the node is outside China, you can skip this step.
Run kubeadm config images pull to check connectivity to gcr.io; if the pull succeeds, move on to the next step.
If it fails, gcr.io is not reachable. In that case the images need to be pulled manually; you can run the script below to pull the corresponding images from Aliyun.
#!/bin/bash
images=(
  kube-apiserver:v1.13.4
  kube-controller-manager:v1.13.4
  kube-scheduler:v1.13.4
  kube-proxy:v1.13.4
  pause:3.1
  etcd:3.2.24
  coredns:1.2.6
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
Alternatively, you can pull the images on a VPS outside China, save them as tar files with docker save, copy the files to the nodes that need them, load them with docker load, and then restore the image tags with docker tag.
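A minimal sketch of that workflow for a single image (host names and paths are placeholders; when an image is saved by name rather than by ID, docker load preserves the tag, so the extra docker tag step is only needed if the tag was lost):

# On the VPS that can reach gcr.io
docker pull k8s.gcr.io/kube-apiserver:v1.13.4
docker save -o kube-apiserver.tar k8s.gcr.io/kube-apiserver:v1.13.4
scp kube-apiserver.tar root@<node-ip>:/tmp/

# On the target node
docker load -i /tmp/kube-apiserver.tar
docker images | grep kube-apiserver   # confirm the k8s.gcr.io tag is present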
Run kubeadm config images list to see which images are required.
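If you want the list to match the image versions pulled by the script above (v1.13.4), the Kubernetes version can be pinned explicitly; treat the exact flag as an assumption to verify against your kubeadm release:

kubeadm config images list --kubernetes-version v1.13.4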
Flannel is used as the network plugin here.
$ yum install flanneld
You can first run kubeadm config images pull to confirm that the images can be pulled successfully.
Initialize the cluster with kubeadm init <args>. Because flannel is used as the network plugin here, --pod-network-cidr=10.244.0.0/16 must be passed at initialization to specify the pod network (if the default CIDR does not suit your environment, adjust it here):
kubeadm init --pod-network-cidr=10.244.0.0/16
After initialization succeeds, output like the following is printed:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Note the kubeadm join ..... command on the last line; run it on the other nodes to join them to the cluster.
If you use kubectl with a non-root account, run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Check the nodes with kubectl get nodes; if the master node appears, the installation succeeded.
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
host-168   Ready    master   41h   v1.13.4
Only the master needs to run kubeadm init; the other nodes join the cluster by running kubeadm join.
Use the command and arguments from the kubeadm init output; for example:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
The token is only valid for 24 hours; if it has expired, run kubeadm token create on the master to create a new token.
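For convenience, a complete join command (including a fresh token and the CA cert hash) can be printed in one step; the flag below is assumed to be available in this kubeadm release:

# Run on the master; prints a ready-to-run "kubeadm join ..." command with a new token
kubeadm token create --print-join-command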
Run kubectl get nodes to list the nodes:
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
host-167   Ready    <none>   41h   v1.13.4
host-168   Ready    master   41h   v1.13.4
The hostnames of the master and worker nodes in the cluster must not be duplicated, otherwise joining the cluster will fail. If a hostname is duplicated, you can change it with hostnamectl set-hostname NAME.
After installing kubernetes-dashboard, you can view and modify the cluster through a browser.
As before, the nodes need to pull the following images. Nodes inside China have to pull them from Aliyun, or pull them on a machine outside China, transfer them to the node, and load them there.
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
gcr.io/google_containers/kube-proxy-amd64:v1.9.8
k8s.gcr.io/pause:3.1
gcr.io/google_containers/pause-amd64:3.0
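Following the same pattern as the pull script above, a sketch for the dashboard image (the mirror repository path is an assumption and may need adjusting):

# Hypothetical mirror path; verify it exists before relying on it
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1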
Get the YAML configuration file:
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
By default kubernetes-dashboard is accessed through kubectl proxy, but that uses HTTP, and when the dashboard is accessed over HTTP, even a successful login gets stuck on the login page. Instead, you can have the dashboard listen on an external port directly, without going through kubectl proxy.
To do this, in the last section of kubernetes-dashboard.yaml, the Dashboard Service, add type: NodePort and nodePort: 30001. The modified result is shown below (the 30001 port can also be changed):
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Deploy the dashboard
Deploy it with kubectl:
$ kubectl apply -f kubernetes-dashboard.yaml
Visit https://nodeIP:30001; if the login page appears, the deployment was successful. The certificate is not trusted by browsers, so if Chrome refuses to open the page, you can use Firefox instead.
For simplicity, an admin account is created here to log in with. Create a new file dashboard-adminuser.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Run kubectl apply -f dashboard-adminuser.yaml on the master to create the admin user and the role binding.
Get the token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
The output looks like this:
Name:         admin-user-token-6gl6l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
On the dashboard login page, choose "Token", paste the token printed above into the input box, and click "Sign in" to log in as admin.
If kubeadm join succeeds but the node does not show up on the master, check whether the node's hostname duplicates that of another node in the cluster; if so, change it with hostnamectl. Also check whether flannel is installed on the node, and install it if it is not.
Check the error messages in the kubelet logs with systemctl status kubelet.
cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
3月 31 17:50:00 host-166 kubelet[19833]: E0331 17:50:00.085663   19833 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Errors like the one above usually mean that flannel is not installed or its configuration cannot be found. You can manually copy the master's /etc/cni/net.d/10-flannel.conflist file into the corresponding directory on the node. If pulling the image fails, you can pull it manually.
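A minimal sketch of copying that file, assuming root SSH access and a placeholder node IP:

# Run on the master; <node-ip> stands for the affected node
scp /etc/cni/net.d/10-flannel.conflist root@<node-ip>:/etc/cni/net.d/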
Make sure the browser accesses the dashboard over HTTPS.
Running kubeadm join fails with the following error:
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
This happens because the token generated by kubeadm expires after 24 hours by default. Run kubeadm token create on the master to create a new token, and use it to replace the token in kubeadm join --token XXX.