This lab environment uses 5 nodes: 3 masters and 2 worker nodes.

k8smaster01  192.168.111.128  Software: etcd, k8smaster, haproxy, keepalived
k8smaster02  192.168.111.129  Software: etcd, k8smaster, haproxy, keepalived
k8smaster03  192.168.111.130  Software: etcd, k8smaster, haproxy, keepalived
k8snode01    192.168.111.131  Software: k8snode
k8snode02    192.168.111.132  Software: k8snode
VIP:         192.168.111.100
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
# Temporarily disable swap
# To disable it permanently, comment out the swap entries in /etc/fstab
swapoff -a
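If you prefer to comment out the swap entry non-interactively instead of editing /etc/fstab by hand, a one-liner like the following can do it (a sketch; check your /etc/fstab layout before running it):

# Back up fstab, then comment out any uncommented line that mounts swap
cp /etc/fstab /etc/fstab.bak
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab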
# Configure forwarding-related kernel parameters, otherwise errors may occur later
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system
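On some kernels the two bridge-nf-call parameters only exist once the br_netfilter module is loaded, so if sysctl --system complains about unknown keys, loading that module first may help (this depends on your kernel build):

modprobe br_netfilter
sysctl --system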
# Load the IPVS-related kernel modules
# They need to be reloaded after a reboot
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
lsmod | grep ip_vs
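To avoid reloading these modules by hand after every reboot, one option on a systemd-based system is a modules-load.d file; a minimal sketch (the file name ipvs.conf is arbitrary):

cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF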
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# yum-utils provides yum-config-manager, so install it before using it
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all && yum makecache
Add the following entries to /etc/hosts:

192.168.111.128 k8smaster01
192.168.111.129 k8smaster02
192.168.111.130 k8smaster03
192.168.111.131 k8snode01
192.168.111.132 k8snode02
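Since these entries are needed on every node, a small loop can push the file to the remaining hosts (a sketch, assuming the /etc/hosts on this node is complete and root SSH access works):

for ip in 192.168.111.129 192.168.111.130 192.168.111.131 192.168.111.132; do
    scp /etc/hosts root@$ip:/etc/hosts
done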
Kubernetes v1.11.1 recommends Docker 17.03; Docker 1.11, 1.12 and 1.13 can also be used. Newer Docker versions are not officially recommended, but the warning can be ignored.
Here we install 18.06.0-ce.
yum -y install docker-ce
systemctl enable docker && systemctl restart docker
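The plain install pulls whatever the newest packaged release is; if you want to pin the 18.06.0-ce version mentioned above, something like the following should work (the exact package version string is an assumption, check the list output first):

yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-18.06.0.ce
systemctl enable docker && systemctl restart docker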
yum install -y kubelet kubeadm kubectl ipvsadm
systemctl enable kubelet && systemctl start kubelet
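Likewise, to stay on the v1.11.1 release this walkthrough targets rather than whatever is newest in the mirror, the packages can be pinned (a sketch; exact version strings may differ in your repository):

yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1 ipvsadm
systemctl enable kubelet && systemctl start kubelet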
# Pull the haproxy image
docker pull haproxy:1.7.8-alpine

# Create the config directory (haproxy itself runs in a container, so it may not exist yet)
mkdir -p /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
  log 127.0.0.1 local0 err
  maxconn 5000
  uid 99
  gid 99
  #daemon
  nbproc 1
  pidfile haproxy.pid

defaults
  mode http
  log 127.0.0.1 local0 err
  maxconn 5000
  retries 3
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  timeout check 2s

listen admin_stats
  mode http
  bind 0.0.0.0:1080
  log 127.0.0.1 local0 err
  stats refresh 30s
  stats uri /haproxy-status
  stats realm Haproxy\ Statistics
  stats auth will:will
  stats hide-version
  stats admin if TRUE

frontend k8s-https
  bind 0.0.0.0:8443
  mode tcp
  #maxconn 50000
  default_backend k8s-https

backend k8s-https
  mode tcp
  balance roundrobin
  server k8smaster01 192.168.111.128:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server k8smaster02 192.168.111.129:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server k8smaster03 192.168.111.130:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF

# Start haproxy
docker run -d --name my-haproxy \
  -v /etc/haproxy:/usr/local/etc/haproxy:ro \
  -p 8443:8443 \
  -p 1080:1080 \
  --restart always \
  haproxy:1.7.8-alpine

# Pull the keepalived image
docker pull osixia/keepalived:1.4.4

# Start it
# Load the related kernel modules
lsmod | grep ip_vs
modprobe ip_vs

# Start keepalived
# ens33 is the NIC on the 192.168.111.0/24 network in this lab
docker run --net=host --cap-add=NET_ADMIN \
  -e KEEPALIVED_INTERFACE=ens33 \
  -e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['192.168.111.100']" \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.111.128','192.168.111.129','192.168.111.130']" \
  -e KEEPALIVED_PASSWORD=hello \
  --name k8s-keepalived \
  --restart always \
  -d osixia/keepalived:1.4.4

# At this point 192.168.111.100 is configured on one of the machines
# Ping test
ping 192.168.111.100

# If it fails, clean up and try again
#docker rm -f k8s-keepalived
#ip a del 192.168.111.100/32 dev ens33
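To confirm the load-balancer layer is healthy, you can also check which master currently holds the VIP and hit the haproxy stats page defined above (the will:will credentials and port 1080 come from the haproxy.cfg written earlier):

# The VIP should appear on exactly one master
ip addr show ens33 | grep 192.168.111.100
# haproxy stats page
curl -u will:will http://192.168.111.100:1080/haproxy-status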
# Configure kubelet to use the Aliyun pause image
# Configure kubelet's cgroup driver
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

# Start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet
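The --cgroup-driver value must match what Docker itself reports (cgroupfs is Docker's default, but some setups use systemd); a quick check before restarting kubelet:

docker info 2>/dev/null | grep -i 'cgroup driver'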
cd /etc/kubernetes
# Generate the configuration file
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "k8smaster01"
- "k8smaster02"
- "k8smaster03"
- "192.168.111.128"
- "192.168.111.129"
- "192.168.111.130"
- "192.168.111.100"
- "127.0.0.1"

api:
  advertiseAddress: 192.168.111.128
  controlPlaneEndpoint: 192.168.111.100:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.111.128:2379"
      advertise-client-urls: "https://192.168.111.128:2379"
      listen-peer-urls: "https://192.168.111.128:2380"
      initial-advertise-peer-urls: "https://192.168.111.128:2380"
      initial-cluster: "k8smaster01=https://192.168.111.128:2380"
    serverCertSANs:
      - k8smaster01
      - 192.168.111.128
    peerCertSANs:
      - k8smaster01
      - 192.168.111.128

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    mode: ipvs
    # mode: iptables
EOF

# Pull the images in advance
# If this fails, it can safely be run again
kubeadm config images pull --config kubeadm-master.config

# Initialize
# Make sure to save the join command from the output
kubeadm init --config kubeadm-master.config

# If initialization fails, reset and retry
#kubeadm reset

# Copy the CA-related files to the other master nodes
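After kubeadm init succeeds on the first master, it can be worth confirming the control plane is up before copying the certificates around (a sketch using the admin kubeconfig kubeadm just wrote; the node will likely show NotReady until the network plugin is installed later):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
kubectl get pods -n kube-system -o wide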
cd /etc/kubernetes/pki/
USER=root
CONTROL_PLANE_IPS="k8smaster02 k8smaster03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp etcd/ca.crt etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp ../admin.conf "${USER}"@$host:/etc/kubernetes/
done
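The loop above assumes passwordless SSH from k8smaster01 to the other masters; if that is not already set up, something like this does it (a sketch):

# Skip ssh-keygen if a key pair already exists
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in k8smaster02 k8smaster03; do
    ssh-copy-id root@$host
done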
Resolving kubeadm init failures: tag the Aliyun images as the official k8s.gcr.io images, which resolves the init failure (v1.11.0 has this issue).
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
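The same retagging can be written as a loop so that nothing is missed (a sketch; the image list is taken from the tag commands above):

for img in kube-apiserver-amd64:v1.11.1 kube-proxy-amd64:v1.11.1 \
           kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 \
           etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 pause:3.1; do
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done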
cd /etc/kubernetes
# Generate the configuration file
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "k8smaster01"
- "k8smaster02"
- "k8smaster03"
- "192.168.111.128"
- "192.168.111.129"
- "192.168.111.130"
- "192.168.111.100"
- "127.0.0.1"

api:
  advertiseAddress: 192.168.111.129
  controlPlaneEndpoint: 192.168.111.100:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.111.129:2379"
      advertise-client-urls: "https://192.168.111.129:2379"
      listen-peer-urls: "https://192.168.111.129:2380"
      initial-advertise-peer-urls: "https://192.168.111.129:2380"
      initial-cluster: "k8smaster01=https://192.168.111.128:2380,k8smaster02=https://192.168.111.129:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - k8smaster02
      - 192.168.111.129
    peerCertSANs:
      - k8smaster02
      - 192.168.111.129

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    mode: ipvs
    # mode: iptables
EOF

# Configure kubelet
kubeadm alpha phase certs all --config kubeadm-master.config
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
systemctl restart kubelet

# Add this etcd member to the cluster
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl exec -n kube-system etcd-k8smaster01 -- etcdctl \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --endpoints=https://192.168.111.128:2379 member add k8smaster02 https://192.168.111.129:2380
kubeadm alpha phase etcd local --config kubeadm-master.config

# Pull the images in advance
kubeadm config images pull --config kubeadm-master.config

# Deploy the control-plane components
kubeadm alpha phase kubeconfig all --config kubeadm-master.config
kubeadm alpha phase controlplane all --config kubeadm-master.config
kubeadm alpha phase mark-master --config kubeadm-master.config
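After `kubeadm alpha phase etcd local` runs on k8smaster02, the etcd cluster should report two members; the same etcdctl invocation used for `member add` above can list them (a sketch):

kubectl exec -n kube-system etcd-k8smaster01 -- etcdctl \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --endpoints=https://192.168.111.128:2379 member list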
cd /etc/kubernetes
# Generate the configuration file
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "k8smaster01"
- "k8smaster02"
- "k8smaster03"
- "192.168.111.128"
- "192.168.111.129"
- "192.168.111.130"
- "192.168.111.100"
- "127.0.0.1"

api:
  advertiseAddress: 192.168.111.130
  controlPlaneEndpoint: 192.168.111.100:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.111.130:2379"
      advertise-client-urls: "https://192.168.111.130:2379"
      listen-peer-urls: "https://192.168.111.130:2380"
      initial-advertise-peer-urls: "https://192.168.111.130:2380"
      initial-cluster: "k8smaster01=https://192.168.111.128:2380,k8smaster02=https://192.168.111.129:2380,k8smaster03=https://192.168.111.130:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - k8smaster03
      - 192.168.111.130
    peerCertSANs:
      - k8smaster03
      - 192.168.111.130

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    mode: ipvs
    # mode: iptables
EOF

# Configure kubelet
kubeadm alpha phase certs all --config kubeadm-master.config
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
systemctl restart kubelet

# Add this etcd member to the cluster
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl exec -n kube-system etcd-k8smaster01 -- etcdctl \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --endpoints=https://192.168.111.128:2379 member add k8smaster03 https://192.168.111.130:2380
kubeadm alpha phase etcd local --config kubeadm-master.config

# Pull the images in advance
kubeadm config images pull --config kubeadm-master.config

# Deploy the control-plane components
kubeadm alpha phase kubeconfig all --config kubeadm-master.config
kubeadm alpha phase controlplane all --config kubeadm-master.config
kubeadm alpha phase mark-master --config kubeadm-master.config
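With all three masters deployed, a final sanity check of etcd health and of the node list is reasonable (same etcdctl flags as above; a sketch):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl exec -n kube-system etcd-k8smaster01 -- etcdctl \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --endpoints=https://192.168.111.128:2379 cluster-health
kubectl get nodes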
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Download the manifest
cd /etc/kubernetes
mkdir flannel && cd flannel
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

# Edit the manifest:
# the network here must match the podSubnet passed to kubeadm above
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
# and change the image
image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64

# If a node has more than one NIC, see flannel issue 39701:
# https://github.com/kubernetes/kubernetes/issues/39701
# For now you need the --iface argument in kube-flannel.yml to specify the name of
# the NIC on the cluster's internal network, otherwise DNS may fail to resolve and
# containers may be unable to communicate. Download kube-flannel.yml locally and
# add --iface=<iface-name> to the flanneld startup arguments:
containers:
- name: kube-flannel
  image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=ens33

# Apply
kubectl apply -f kube-flannel.yml

# Check
kubectl get pods --namespace kube-system
kubectl get svc --namespace kube-system
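Since kube-proxy was configured with mode: ipvs, once the network is up you can also verify that IPVS rules are actually being programmed (ipvsadm was installed earlier):

ipvsadm -Ln
kubectl get nodes -o wide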
The join command below was generated on my master; it may not match your environment.
kubeadm join 192.168.111.100:8443 --token uf9oul.7k4csgxe5p7upvdb --discovery-token-ca-cert-hash sha256:36bc173b46eb0545fc30dd5db2d27dab70a257bd406fd791647d991a69454595
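The token in the join command expires (24 hours by default); if it has, a fresh join command can be printed on any master with a standard kubeadm subcommand:

kubeadm token create --print-join-command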
Handling errors on the worker nodes:
tail -f /var/log/messages
Append the following configuration to the kubelet configuration file
/etc/sysconfig/kubelet
# Append the following to KUBELET_EXTRA_ARGS
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
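Applied to the KUBELET_EXTRA_ARGS line written earlier, the file would end up looking roughly like this (a sketch), after which kubelet needs a restart:

cat /etc/sysconfig/kubelet
# KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
systemctl daemon-reload
systemctl restart kubelet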
With that, the cluster environment is complete; add any remaining add-ons yourself.