yum localinstall -y kubeadm-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm
Edit /etc/sysctl.conf and add the following:

net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply the changes immediately:

sysctl -p
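If the two net.bridge settings are rejected with "No such file or directory", the br_netfilter kernel module is probably not loaded yet; loading it first usually fixes this (a host-dependent sketch, run as root):

```shell
# load the bridge netfilter module, then re-apply the sysctls
modprobe br_netfilter
sysctl -p
```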
The kubelet and Docker each support two cgroup drivers: cgroupfs and systemd. Make sure both applications use the same driver.
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# change systemd to cgroupfs
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# add a new line
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=foxchan/google_containers/pause-amd64:3.0"
After editing, reload systemd:
systemctl daemon-reload
Alternatively, add the following to the Docker startup command to change its cgroup driver to systemd:
--exec-opt native.cgroupdriver=systemd
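One way to pass that flag is a systemd drop-in for the docker service; a minimal sketch (the drop-in filename and the /usr/bin/dockerd path are assumptions — match them to your install):

```ini
# /etc/systemd/system/docker.service.d/cgroup-driver.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
```

After writing the drop-in, run `systemctl daemon-reload && systemctl restart docker` for it to take effect. The empty `ExecStart=` line is required to clear the original command before redefining it.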
kubeadm init --kubernetes-version=1.9.0 --token-ttl 0
Parameter notes: --token-ttl 0 keeps the bootstrap token from expiring (by default it expires after 24 hours).
Image list
If you use your own images or images from Docker Hub, you can use a script to batch-retag them to the expected image names:
docker images | sed 's/foxchan/gcr.io\/google_containers/'| awk '{print "docker tag "$3" "$1":"$2}'
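To see what that pipeline produces (it only prints the `docker tag` commands; append `| sh` to actually execute them), here is the same sed/awk applied to a simulated `docker images` line instead of real docker output — the repository, tag, and image ID below are made up:

```shell
# Simulated `docker images` row: REPOSITORY TAG IMAGE-ID
# In real use, replace `echo ...` with `docker images`.
echo 'foxchan/pause-amd64 3.0 abc123def456' \
  | sed 's/foxchan/gcr.io\/google_containers/' \
  | awk '{print "docker tag "$3" "$1":"$2}'
# prints: docker tag abc123def456 gcr.io/google_containers/pause-amd64:3.0
```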
Installation output:
[root@kvm-gs242024 ~]# kubeadm init --kubernetes-version=1.9.0 --token-ttl 0 --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kvm-gs024 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.24]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[apiclient] All control plane components are healthy after 78.502690 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kvm-gs242024 as master by adding a label and a taint
[markmaster] Master kvm-gs242024 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 1ac970.704ce2d03cc45382
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 1ac970.704ce2d03cc45382 192.168.0.24:6443 --discovery-token-ca-cert-hash sha256:f70f07be83a7b2af2c41752b00def4389e3019006b3be643fe1ccf1c53368043
Remember to save the token: in this version it cannot be recovered with a command, and without it you cannot add nodes.
[root@kvm-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@kvm-master ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE   VERSION
kvm-node1    Ready      <none>    1h    v1.9.0
kvm-master   NotReady   master    18h   v1.9.0
kvm-node2    Ready      <none>    6m    v1.9.0
Querying the apiserver directly with curl and the apiserver-kubelet-client certificate fails:

curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key https://k8smaster:6443
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
https://github.com/kubernetes/kubernetes/issues/48378
Workaround:

export KUBECONFIG=/etc/kubernetes/kubelet.conf

This is an ordinary user without full permissions, so some commands are rejected:

Error from server (Forbidden): daemonsets.extensions is forbidden: User "system:node:kvm-master" cannot list daemonsets.extensions in the namespace "default"
For the admin user, use:

export KUBECONFIG=/etc/kubernetes/admin.conf
### For security reasons, pods are not scheduled onto the master node by default. This restriction can be lifted with:
kubectl taint nodes --all node-role.kubernetes.io/master-
Install the RBAC manifest first:
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml
Download calico.yaml:
https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml
If you use Calico's bundled etcd, make sure that etcd stays stable.
If you use your own etcd cluster, change etcd_endpoints in the yaml.
With the default install, nodes with multiple NICs will error out and networking will not come up: Calico binds a random NIC, so node registration fails.
Calico error log:
Skipping datastore connection test
IPv4 address 10.96.0.1 discovered on interface kube-ipvs0
No AS number configured on node resource, using global value
You need to edit calico.yaml; note the order of the entries:
- name: IP
  value: "autodetect"
- name: IP_AUTODETECTION_METHOD
  value: "can-reach=192.168.1.1"
IP_AUTODETECTION_METHOD options:
Use the interface that can reach a given IP:
can-reach=192.168.1.1
Use the interface that can reach a given domain name:
can-reach=www.baidu.com
Use a specific interface:
interface=ethx
Download the yaml:
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
After downloading, change the image name:
- name: kubernetes-dashboard
  image: foxchan/google_containers/kubernetes-dashboard-amd64:v1.8.0
kubectl proxy --address=masterip --accept-hosts='^*$'
Access it at:
http://masterip:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Download the yaml files:
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
After downloading, change the images to your own:
#grafana.yaml
- name: grafana
  image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
#heapster.yaml
- name: heapster
  image: gcr.io/google_containers/heapster-amd64:v1.4.2
#influxdb.yaml
- name: influxdb
  image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
Download the images:
docker pull foxchan/heapster-grafana-amd64:v4.4.3
docker pull foxchan/heapster-amd64:v1.4.2
docker pull foxchan/heapster-influxdb-amd64:v1.3.3
If this succeeds, the dashboard page will show graphs.
Download the scripts:
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
Use the script to render the coredns.yaml.sed template:
./deploy.sh <cluster-ip-range> | kubectl apply -f -
My cluster IP range is 10.96.0.0/12:
./deploy.sh 10.96.0.0/12 | kubectl apply -f -
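Under the hood, deploy.sh works by sed-substituting your values into the coredns.yaml.sed template and printing the result, which is why it can be piped straight into kubectl. A simplified illustration of that substitution (the template line and placeholder name below are hypothetical, not the real template contents):

```shell
# Sketch of the deploy.sh approach: replace a placeholder in a template
# line with the cluster's service CIDR. Placeholder name is made up.
CIDR='10.96.0.0/12'
echo 'proxy . SERVICE_CIDR' | sed "s@SERVICE_CIDR@${CIDR}@"
# prints: proxy . 10.96.0.0/12
```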
Make sure CoreDNS is running properly; then you can delete the old kube-dns (skydns):
kubectl delete --namespace=kube-system deployment kube-dns
CoreDNS can also be enabled at kubeadm init time; the parameter is:
kubeadm init --feature-gates=CoreDNS=true
總的來講1.9 和1.8 沒什麼大的變化,如下是我關注的