2019.09.11
Environment preparation

Prepare three nodes: one master and two worker nodes.
- 192.168.122.193 master
- 192.168.122.194 node01
- 192.168.122.195 node02
Change the hostname
- hostname shows the current hostname
- Edit /etc/sysconfig/network
- NETWORKING=yes HOSTNAME=<hostname>
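On CentOS 7 the hostname can also be set persistently in one step with hostnamectl; a minimal sketch, run on each node with its own name from the table above:

```shell
# Sets the static hostname; persists across reboots on systemd systems.
hostnamectl set-hostname master
# Verify the new name.
hostname
```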
Configure DNS

1. vi /etc/sysconfig/network-scripts/ifcfg-eth0, add NM_CONTROLLED=no, then restart the network service: systemctl restart network
2. Configure DNS:
   - Edit the NetworkManager config file: vim /etc/NetworkManager/NetworkManager.conf, adding dns=no under [main]
   - Edit /etc/resolv.conf, adding:
     # primary DNS server (Alibaba DNS)
     nameserver 223.5.5.5
     # backup DNS server
     nameserver 114.114.114.114
   - Restart NetworkManager: systemctl restart NetworkManager
Configure the Aliyun yum repository
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Synchronize time across all nodes with NTP
- Install the software
- yum install chrony
- Start time synchronization
- systemctl start chronyd
- systemctl enable chronyd
- Check the time sources
- chronyc sources -v
- chronyc sourcestats -v
Configure /etc/hosts for name resolution

[root@master ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.193 master
192.168.122.194 node01
192.168.122.195 node02
Disable the firewall
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
Disable SELinux

Edit the configuration
vi /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled
SELINUXTYPE=targeted

Reboot
reboot
Check the status
[root@master ~]# getenforce
Disabled
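SELinux can also be switched to permissive mode immediately, without waiting for the reboot:

```shell
# Takes effect at once but does not survive a reboot;
# the change in /etc/selinux/config makes it permanent.
setenforce 0
# getenforce now reports Permissive.
getenforce
```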
Disable swap

swapoff -a
Remove the swap entry
sudo vim /etc/fstab and comment out the swap line
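Commenting out the swap line can also be done non-interactively. The sketch below works on a throwaway file with hypothetical contents; on a real node you would run the sed command against /etc/fstab itself, after backing it up:

```shell
# Hypothetical fstab contents for illustration only.
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 swap swap defaults 0 0\n' > /tmp/fstab.demo

# Prefix any line that mentions a swap mount with '#'.
sed -i '/\bswap\b/ s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```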
reboot
Set up SSH mutual trust
ssh-keygen -t rsa
cd /root/.ssh
Append the contents of each node's .pub file to the authorized_keys file,
then distribute authorized_keys to /root/.ssh on every node.
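The key generation step can be made non-interactive. The sketch below writes a demo key pair to /tmp; a real setup would use the default /root/.ssh/id_rsa path and then push the key with ssh-copy-id instead of editing authorized_keys by hand:

```shell
# -N '' sets an empty passphrase, -f the output file, -q suppresses prompts.
ssh-keygen -t rsa -N '' -f /tmp/demo_id_rsa -q

# On a real cluster, ssh-copy-id appends the public key to each node's
# authorized_keys in one step:
#   for h in master node01 node02; do ssh-copy-id root@$h; done
ls /tmp/demo_id_rsa.pub
```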
Install the Docker packages

Install docker-ce
Configure the yum repository
cd /etc/yum.repos.d
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker-ce
yum install docker-ce -y
Set up the Aliyun registry accelerator for Docker
In the Aliyun console, open the Container Registry service and click the accelerator page.
Reference: https://www.cnblogs.com/zhxshseu/p/5970a5a763c8fe2b01cd2eb63a8622b2.html
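The accelerator from the linked article boils down to a registry-mirrors entry in /etc/docker/daemon.json. The mirror URL below is a placeholder; the Aliyun console shows the address that belongs to your account:

```shell
# <your-id> is a placeholder for the account-specific mirror prefix.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
# Reload and restart Docker to pick up the new mirror.
systemctl daemon-reload
systemctl restart docker
```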
Edit /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# Environment="HTTPS_PROXY=http://www.ik8s.io:10070"    <---- added: proxy setting
# Environment="NO_PROXY=127.0.0.0/8,192.168.122.0/24"   <---- added: no-proxy list
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT      <---- added: firewall policy
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl -p /etc/sysctl.d/k8s.conf
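If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is usually not loaded yet:

```shell
# Load the module now, then re-apply the sysctl settings.
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```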
Enable Docker at boot
systemctl enable docker
Install kubeadm
Configure the yum repository
cd /etc/yum.repos.d
vim kubernetes.repo

[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Verify that the repository is configured correctly
yum repolist

Correct output:
[root@master yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
repo id                     repo name                                   status
base/7/x86_64               CentOS-7 - Base - mirrors.aliyun.com        10,019
docker-ce-stable/x86_64     Docker CE Stable - x86_64                       56
extras/7/x86_64             CentOS-7 - Extras - mirrors.aliyun.com         435
updates/7/x86_64            CentOS-7 - Updates - mirrors.aliyun.com      2,500
Install the software (master node)
yum install kubeadm kubectl kubelet
Initialize the cluster
Initialize the master
Handling the case where swap is still enabled
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Install the master
View the default init parameters
kubeadm config print init-defaults

[root@master ~]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io        <-- default image registry, note this!
kind: ClusterConfiguration
kubernetesVersion: v1.15.0         <-- version, note this!
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Dry-run test
kubeadm init --kubernetes-version="1.15.3" --pod-network-cidr="10.244.0.0/16" --dry-run
各節點提早下載鏡像
查看鏡像 [root@master fb]# kubeadm config images list W0913 11:06:24.804608 6687 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) W0913 11:06:24.804748 6687 version.go:99] falling back to the local client version: v1.15.3 k8s.gcr.io/kube-apiserver:v1.15.3 k8s.gcr.io/kube-controller-manager:v1.15.3 k8s.gcr.io/kube-scheduler:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/coredns:1.3.1 [root@master fb]# [root@master fb]# [root@master fb]# [root@master fb]# [root@master fb]# [root@master fb]# [root@master fb]# [root@node-2 opt]# cat k8s-image-download.sh #!/bin/bash # liyongjian5179@163.com # download k8s 1.15.3 images # get image-list by 'kubeadm config images list --kubernetes-version=v1.15.3' # gcr.azk8s.cn/google-containers == k8s.gcr.io if [ $# -ne 1 ];then echo "please user in: ./`basename $0` KUBERNETES-VERSION" exit 1 fi version=$1 images=`kubeadm config images list --kubernetes-version=${version} |awk -F'/' '{print $2}'` for imageName in ${images[@]};do docker pull gcr.azk8s.cn/google-containers/$imageName docker tag gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName docker rmi gcr.azk8s.cn/google-containers/$imageName done [root@node-2 opt]#./k8s-image-download.sh 1.15.3 參考 https://www.cnblogs.com/liyongjian5179/p/11417794.html 下載完鏡像查看鏡像 [root@master fb]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/kube-proxy v1.15.3 232b5c793146 3 weeks ago 82.4MB k8s.gcr.io/kube-apiserver v1.15.3 5eb2d3fc7a44 3 weeks ago 207MB k8s.gcr.io/kube-scheduler v1.15.3 703f9c69a5d5 3 weeks ago 81.1MB k8s.gcr.io/kube-controller-manager v1.15.3 e77c31de5547 3 weeks ago 159MB k8s.gcr.io/coredns 1.3.1 eb516548c180 8 months ago 40.3MB k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 9 months ago 258MB 
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB [root@master fb]#
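The awk step in k8s-image-download.sh simply strips the registry prefix from each image name so the script can re-tag the mirrored image back to its k8s.gcr.io name. Its effect can be checked in isolation, without any network access:

```shell
# The same field split the script uses: everything after the first '/'.
echo 'k8s.gcr.io/kube-apiserver:v1.15.3' | awk -F'/' '{print $2}'
# prints: kube-apiserver:v1.15.3
```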
Deploy the master node
kubeadm init --kubernetes-version="1.15.3" --pod-network-cidr="10.244.0.0/16"
Deployment complete
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.193:6443 --token hasvlr.9puhkyn3pzbhqfj7 \
    --discovery-token-ca-cert-hash sha256:5d1d4cb9b84debc570757193298fd5c89ce9e0bbbdf9f397de81ce1d671db0bb
Create the kubectl configuration
cd
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel component
Before installing flannel, the node is NotReady:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   5m1s   v1.15.3

Install flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

After flannel is installed, the node status changes to Ready:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   13m   v1.15.3
systemctl enable kubelet
Initialize the worker nodes
Install the software
yum install kubeadm kubectl
Join the cluster
kubeadm join 192.168.122.193:6443 --token hasvlr.9puhkyn3pzbhqfj7 \
    --discovery-token-ca-cert-hash sha256:5d1d4cb9b84debc570757193298fd5c89ce9e0bbbdf9f397de81ce1d671db0bb

[root@node01 ~]# kubeadm join 192.168.122.193:6443 --token hasvlr.9puhkyn3pzbhqfj7 \
>     --discovery-token-ca-cert-hash sha256:5d1d4cb9b84debc570757193298fd5c89ce9e0bbbdf9f397de81ce1d671db0bb
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Images pulled manually on the node:
docker pull gcr.azk8s.cn/google-containers/pause:3.1
docker tag gcr.azk8s.cn/google-containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi gcr.azk8s.cn/google-containers/pause:3.1
docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.15.3
docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker rmi gcr.azk8s.cn/google-containers/kube-proxy:v1.15.3
docker pull quay.io/coreos/flannel:v0.11.0-amd64
Note
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
The Docker version validated for this release is 18.09; the installed version is 19.03.2.
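The join token printed by kubeadm init expires after 24 hours by default (the ttl shown in the init-defaults output above). If it has expired before a node joins, a fresh join command can be generated on the master:

```shell
# Prints a complete 'kubeadm join ...' line with a new token and the CA cert hash.
kubeadm token create --print-join-command
```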
Deployment complete
Check the result
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3h48m   v1.15.3
node01   Ready    <none>   49m     v1.15.3
node02   Ready    <none>   22m     v1.15.3
Notes

Images required on the master node
[root@master fb]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.15.3         232b5c793146   3 weeks ago     82.4MB
k8s.gcr.io/kube-apiserver            v1.15.3         5eb2d3fc7a44   3 weeks ago     207MB
k8s.gcr.io/kube-controller-manager   v1.15.3         e77c31de5547   3 weeks ago     159MB
k8s.gcr.io/kube-scheduler            v1.15.3         703f9c69a5d5   3 weeks ago     81.1MB
quay.io/coreos/flannel               v0.11.0-amd64   ff281650a721   7 months ago    52.5MB
k8s.gcr.io/coredns                   1.3.1           eb516548c180   8 months ago    40.3MB
k8s.gcr.io/etcd                      3.3.10          2c4adeb21b4f   9 months ago    258MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   21 months ago   742kB
Images required on a worker node
[root@node02 home]# docker images
REPOSITORY               TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy    v1.15.3         232b5c793146   3 weeks ago     82.4MB
quay.io/coreos/flannel   v0.11.0-amd64   ff281650a721   7 months ago    52.5MB
k8s.gcr.io/pause         3.1             da86e6ba6ca1   21 months ago   742kB