Pitfalls in k8s 1.12.1 and how to fix them
gcr.io is blocked in China, so you need to pull the images from a mirror yourself and then retag them. To see exactly which images need to be pulled, run: kubeadm config images list
The images I built myself are all at https://github.com/FingerLiu/... ; if you need them, you can also use the script in that repo directly:
wget -O - https://raw.githubusercontent.com/FingerLiu/k8s.gcr.io/master/pull.sh | bash
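If you prefer to do the pull-and-retag step by hand, the idea can be sketched as below. The mirror registry prefix (MIRROR) and the helper name mirror_name are assumptions for illustration; substitute whatever mirror you actually use.

```shell
# Assumption: a mirror registry that hosts copies of the k8s.gcr.io images.
MIRROR="registry.cn-hangzhou.aliyuncs.com/google_containers"

# Rewrite a k8s.gcr.io image reference to its mirror equivalent.
mirror_name() {
  echo "$1" | sed "s#^k8s.gcr.io/#${MIRROR}/#"
}

# For each image kubeadm needs, pull from the mirror, then retag
# back to the k8s.gcr.io name that kubeadm expects.
for img in $(kubeadm config images list 2>/dev/null); do
  src=$(mirror_name "$img")
  docker pull "$src" && docker tag "$src" "$img"
done
```

This leaves the images tagged exactly as kubeadm expects, so kubeadm init proceeds without reaching gcr.io.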
When reinstalling after kubeadm reset, the .kube folder is not cleared, but the keys have been regenerated, so the old key and secret no longer match. The fix is to empty the .kube directory and copy /etc/kubernetes/admin.conf back in.
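The fix above can be sketched as a small helper. The function name refresh_kubeconfig is made up for illustration; its defaults match the paths described above, and both are parameterized so you can adapt them.

```shell
# Clear the stale kubeconfig left behind by a previous install and
# copy in the freshly generated admin config.
refresh_kubeconfig() {
  local kube_dir="${1:-$HOME/.kube}"
  local admin_conf="${2:-/etc/kubernetes/admin.conf}"
  rm -rf "$kube_dir"
  mkdir -p "$kube_dir"
  cp "$admin_conf" "$kube_dir/config"
  # Make the copy readable by the current user; ignore failure if
  # ownership is already correct or chown is not permitted.
  chown "$(id -u):$(id -g)" "$kube_dir/config" 2>/dev/null || true
}
```

Run it (likely with sudo for the copy from /etc/kubernetes) right after kubeadm init finishes.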
Pods stuck in Pending with "network not ready": install the matching version of flannel. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
By default, k8s does not allow scheduling workloads onto the master node; to force-allow it: kubectl taint nodes --all node-role.kubernetes.io/master-
A bug in kubelet itself; it can be ignored.
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
That's because you don't have permission to deploy tiller; add a service account for it:
kubectl create serviceaccount --namespace kube-system tiller
serviceaccount "tiller" created
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding "tiller-cluster-rule" created
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment "tiller-deploy" patched
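The serviceaccount and clusterrolebinding created above can equivalently be expressed as a single declarative manifest and applied with kubectl apply -f; this is a sketch of the same RBAC setup, not an additional step.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Note that binding cluster-admin to tiller is convenient for a test cluster but overly broad for production.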
Then run the following to verify:
helm list
helm repo update