OS: CentOS 7. Add the following entries to /etc/hosts on both machines (an /etc/hosts entry is a plain IP followed by the hostname; no CIDR mask):

    192.168.3.1 zdtest1 # master & node1
    192.168.3.2 zdtest2 # node2
1. Download flannel, etcd and kubernetes
1) Download flannel v0.5.4 (put all files under /home/k8s; flannel must be installed on both the master and the nodes):

    wget https://github.com/coreos/flannel/releases/download/v0.5.4/flannel-0.5.4-linux-amd64.tar.gz
    tar -zvxf flannel-0.5.4-linux-amd64.tar.gz
    cd flannel-0.5.4
    cp -rp flanneld /usr/local/bin/
    cp -rp mk-docker-opts.sh /usr/local/bin/

2) Download etcd v2.2.1:

    curl -L https://github.com/coreos/etcd/releases/download/v2.2.1/etcd-v2.2.1-linux-amd64.tar.gz -o etcd-v2.2.1-linux-amd64.tar.gz
    tar xzvf etcd-v2.2.1-linux-amd64.tar.gz
    cd etcd-v2.2.1-linux-amd64
    cp -rp etcd* /usr/local/bin/

3) Download kubernetes v1.0.6:

    wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.6/kubernetes.tar.gz
    cd /home/k8s/
    tar -zvxf kubernetes.tar.gz
    cd /home/k8s/kubernetes/server/
    tar -zvxf kubernetes-server-linux-amd64.tar.gz
    cd /home/k8s/kubernetes/server/kubernetes/server/bin/
    cp -rp hyperkube /usr/local/bin/
    cp -rp kube-apiserver /usr/local/bin/
    cp -rp kube-controller-manager /usr/local/bin/
    cp -rp kubectl /usr/local/bin/
    cp -rp kubelet /usr/local/bin/
    cp -rp kube-proxy /usr/local/bin/
    cp -rp kubernetes /usr/local/bin/
    cp -rp kube-scheduler /usr/local/bin/

Copy the kubelet and kube-proxy binaries to /usr/local/bin/ on the node server:

    rsync -avzP /usr/local/bin/kubelet root@192.168.3.2:/usr/local/bin/
    rsync -avzP /usr/local/bin/kube-proxy root@192.168.3.2:/usr/local/bin/
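The repeated cp calls can be wrapped in a small helper. This is my own sketch, not part of the original walkthrough; the binary names are the ones copied above, and the function name is hypothetical:

```shell
# Sketch only: copy the kubernetes server binaries into a destination directory.
#   copy_k8s_bins <src-dir> <dst-dir>
copy_k8s_bins() {
  src=$1
  dst=$2
  for b in hyperkube kube-apiserver kube-controller-manager kubectl \
           kubelet kube-proxy kubernetes kube-scheduler; do
    cp -rp "$src/$b" "$dst/" || return 1   # bail out on the first missing binary
  done
}
# e.g. copy_k8s_bins /home/k8s/kubernetes/server/kubernetes/server/bin /usr/local/bin
```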
2. Configure firewall rules
Run on node1:

    iptables -I INPUT -s 192.168.3.2/24 -p tcp --dport 8080 -j ACCEPT
    iptables -I INPUT -s 192.168.3.2/24 -p tcp --dport 4001 -j ACCEPT
    iptables -I INPUT -s 192.168.3.2/24 -p tcp --dport 7001 -j ACCEPT
    iptables -I INPUT -s 192.168.3.2/24 -p tcp --dport 8888 -j ACCEPT

Run on node2:

    iptables -I INPUT -s 192.168.3.1/24 -p tcp --dport 10250 -j ACCEPT
    iptables -I INPUT -s 192.168.3.1/24 -p udp --dport 8285 -j ACCEPT
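The rules all follow one pattern, so a tiny generator (my own sketch; the function name is hypothetical) lets you review the exact iptables commands before piping them to a shell:

```shell
# Sketch: print (not apply) ACCEPT rules for a source subnet, protocol and port list.
print_accept_rules() {
  src=$1; proto=$2; shift 2
  for port in "$@"; do
    echo "iptables -I INPUT -s $src -p $proto --dport $port -j ACCEPT"
  done
}
# e.g. on node1: print_accept_rules 192.168.3.2/24 tcp 8080 4001 7001 8888 | sh
```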
3. Start etcd, flannel and docker
Start etcd on the master:

    nohup etcd --listen-peer-urls http://0.0.0.0:7001 --data-dir=/var/lib/etcd \
      --listen-client-urls http://0.0.0.0:4001 \
      --advertise-client-urls http://zdtest1:4001 >> /var/log/etcd.log 2>&1 &

Start the flannel server and write the overlay network config into etcd:

    mkdir -p /var/log/flanneld
    nohup flanneld --listen=0.0.0.0:8888 >> /var/log/flanneld/flanneld.log 2>&1 &
    etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'

Run on node1 and node2:

    mkdir -p /var/log/flanneld
    nohup flanneld -etcd-endpoints=http://zdtest1:4001 -remote=zdtest1:8888 >> /var/log/flanneld/flanenlnode.log 2>&1 &
    source /run/flannel/subnet.env

Start docker (docker must already be installed). Delete the docker0 address first, then start the daemon by hand with flannel's subnet and MTU:

    ip a del 172.17.0.1/16 dev docker0
    docker -d -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} >> /var/log/dockerd 2>&1 &

(Alternatively, put the --bip/--mtu options into docker's service configuration and run systemctl start docker.service.)
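For orientation, after `source /run/flannel/subnet.env` the variables that feed docker's --bip/--mtu flags look roughly like this. The subnet and MTU below are illustrative values only; the real ones depend on the "Network" config written to etcd and on the flannel backend:

```shell
# Illustrative subnet.env as written by flanneld (example values, not real output)
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.56.1/24
FLANNEL_MTU=1472
EOF
. /tmp/subnet.env
# These are the values the manual docker start consumes:
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
# → --bip=10.1.56.1/24 --mtu=1472
```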
4. Start kubernetes (here kubernetes runs with authentication enabled, not in passwordless mode)
On the master, start kube-apiserver, kube-controller-manager and kube-scheduler (the first time kube-apiserver starts there may be no key and crt yet; remove those options, start once so they get generated, then restore them and restart):

    mkdir -p /var/log/kubernetes
    nohup kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://zdtest1:4001 \
      --address=0.0.0.0 --allow-privileged=true \
      --service-cluster-ip-range=10.254.0.0/24 \
      --tls-cert-file=/var/run/kubernetes/apiserver.crt \
      --tls-private-key-file=/var/run/kubernetes/apiserver.key \
      --admission_control=LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota >> /var/log/kubernetes/kube-apiserver.log 2>&1 &

    nohup kube-controller-manager --logtostderr=true --master=http://zdtest1:8080 \
      --v=0 --node-monitor-grace-period=10s --pod-eviction-timeout=10s \
      --root-ca-file=/var/run/kubernetes/apiserver.crt \
      --service_account_private_key_file=/var/run/kubernetes/apiserver.key \
      >> /var/log/kubernetes/kube-controller-manager.log 2>&1 &

    nohup kube-scheduler --logtostderr=true --v=0 --master=http://zdtest1:8080 >> /var/log/kubernetes/kube-scheduler.log 2>&1 &

If NamespaceLifecycle,NamespaceExists are added to --admission_control you will hit a "Namespace kube-system does not exist" error (the kube-system namespace has to be created by hand).
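If you do enable those admission controllers, the missing namespace can be created from a minimal manifest. This is a sketch of one way to do it (file path is arbitrary); apply it afterwards with `kubectl create -f /tmp/kube-system.yaml`:

```shell
# Sketch: write a manifest for the kube-system namespace.
cat > /tmp/kube-system.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
EOF
```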
5. Start kubelet and kube-proxy on node1 and node2
    mkdir -p /var/log/kubernetes
    nohup kubelet --logtostderr=true --v=0 --api_servers=http://zdtest1:8080 --address=0.0.0.0 --allow_privileged=false \
      --tls-cert-file=/var/run/kubernetes/kubelet.crt --tls-private-key-file=/var/run/kubernetes/kubelet.key \
      --config=/etc/kubernetes/manifests/ \
      --cluster_dns=10.254.0.120 --cluster_domain=zdtest.com > /var/log/kubernetes/kubelet.log 2>&1 &

    nohup kube-proxy --logtostderr=true --v=0 --master=http://zdtest1:8080 > /var/log/kubernetes/kube-proxy.log 2>&1 &
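Every component in sections 4 and 5 is launched with the same "mkdir -p; nohup … >> log 2>&1 &" incantation. A tiny wrapper (my own sketch, not from the original article; the function name is hypothetical) keeps those lines readable:

```shell
# Sketch: run a command in the background, appending its output to $LOG_DIR/<name>.log
LOG_DIR=${LOG_DIR:-/var/log/kubernetes}
start_logged() {
  name=$1; shift
  mkdir -p "$LOG_DIR"
  nohup "$@" >> "$LOG_DIR/$name.log" 2>&1 &
}
# e.g. start_logged kube-proxy kube-proxy --logtostderr=true --v=0 --master=http://zdtest1:8080
```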
6. Run the first pod with kubectl
Pull the nginx image and the pause image in advance, otherwise the run will fail (pause manages the pod's network and related housekeeping):

    docker pull nginx
    docker pull gcr.io/google_containers/pause:0.8.0

Google is blocked by the firewall; you can fetch the image via ××× or from a domestic docker registry (such as Tenxcloud or Alauda) and retag it.

    kubectl run my-nginx --image=nginx --replicas=2 --port=80
    kubectl get pods -o wide

Get each container's IP (kubectl exec <pod-name> <command>):

    kubectl exec my-nginx-wlwpw ip a | grep 10.1
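Instead of eyeballing the grep output, a small filter can pull the flannel-assigned address out of `ip a` output. This is my own sketch; it assumes, as configured in etcd above, that pod addresses live in 10.1.0.0/16:

```shell
# Sketch: extract the first 10.1.x.x/len address from `ip a` output on stdin.
flannel_ip() {
  grep -o '10\.1\.[0-9]*\.[0-9]*/[0-9]*' | head -n 1
}
# e.g. kubectl exec my-nginx-wlwpw ip a | flannel_ip
```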
7. Access between containers
Access the two nginx containers that were created.