IP | CPU | Memory | hostname |
---|---|---|---|
192.168.198.200 | >=2c | >=2G | master |
192.168.198.201 | >=2c | >=2G | node1 |
192.168.198.202 | >=2c | >=2G | node2 |
For this lab I used VMs created with VMware Workstation. My machine is fairly low-spec, so the master got 2G/2C and each node got 1G/2C; size your VMs according to your own resources, using the recommended minimums in the table above.
Note: the hostname must not contain uppercase letters, e.g. Master.
環境初始化操做
3.1 Configure the hostname
hostnamectl set-hostname master   # on the master
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2
3.2 Configure /etc/hosts
echo "192.168.198.200 master" >> /etc/hosts
echo "192.168.198.201 node1" >> /etc/hosts
echo "192.168.198.202 node2" >> /etc/hosts
3.3 Disable the firewall, SELinux, and swap
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
# Turn off swap
swapoff -a
# Comment out the swap line in /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
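If you prefer not to edit /etc/fstab by hand, a sed one-liner can comment out the swap entry (a sketch assuming a standard fstab layout; double-check the file afterwards):
```
# Comment out any uncommented line that mounts swap, so it stays off after a reboot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
```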
# Load the br_netfilter module
modprobe br_netfilter
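Note that modprobe only loads the module for the current boot; since the machines are rebooted later, it is worth making it persistent (a minimal sketch using systemd's modules-load.d, not part of the original steps):
```
# Load br_netfilter automatically at boot (systemd reads /etc/modules-load.d/ at startup)
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
```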
3.4 Configure kernel parameters
# Configure sysctl kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the file
sysctl -p /etc/sysctl.d/k8s.conf
# Raise the open-file and process limits, both for login sessions and for systemd-managed services
echo "* soft nofile 655360" >> /etc/security/limits.conf
echo "* hard nofile 655360" >> /etc/security/limits.conf
echo "* soft nproc 655360" >> /etc/security/limits.conf
echo "* hard nproc 655360" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf
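To confirm the kernel parameters actually took effect, an optional quick check looks like this:
```
# Both bridge parameters should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# limits.conf applies to new login sessions; the systemd defaults take effect after the reboot done later
ulimit -n
```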
Configure the CentOS YUM repositories
```
# Configure the domestic Tencent base repo, the EPEL repo, and the Kubernetes repo
mkdir -p /etc/yum.repos.d/repo.bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all && yum makecache
# Configure the Kubernetes repo (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install some dependency packages
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl
Configure time synchronization
yum install -y chrony
systemctl enable chronyd.service && systemctl start chronyd.service && systemctl status chronyd.service
chronyc sources
Run the date command to check the system time; it will sync after a short while.
Configure passwordless SSH between the nodes
With SSH trust in place, the nodes can reach each other without a password, which makes later operations easier.
ssh-keygen        # run on every machine, pressing Enter at every prompt
ssh-copy-id node1 # run on the master to copy its public key to each node; answer yes and enter the password
ssh-copy-id node2
以上操做後,所有重啓一下。bash
Set up the Docker YUM repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
List the available Docker versions
yum list docker-ce --showduplicates | sort -r
Install Docker, pinned to 18.06.1
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker
tee /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
  "graph": "/tol/docker-data"
}
EOF
Start Docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
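To confirm that the registry mirror and data directory from daemon.json were picked up, an optional check:
```
# "Registry Mirrors" and "Docker Root Dir" should reflect the values from daemon.json
docker info | grep -A1 -iE 'registry mirrors|root dir'
```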
# Install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
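The command above installs whatever is latest in the repo. Since this walkthrough targets v1.15.1, you may want to pin the packages explicitly (a sketch; exact package versions available in the mirror may differ):
```
# Pin the packages to the cluster version used in this article
yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1 --disableexcludes=kubernetes
```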
Get the list of images required for initialization
To build Kubernetes with kubeadm you first need the base images it runs on, such as kube-proxy, kube-apiserver, kube-controller-manager and so on. How do you find out which images to download? Since kubeadm v1.11+ there is a kubeadm config print-default command that conveniently dumps kubeadm's default configuration to a file; that file contains the base settings needed for the corresponding Kubernetes version. You can also run kubeadm config images list to see the list of images that have to be installed.
[root@master ]# kubeadm config images list
W0806 17:29:06.709181 130077 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0806 17:29:06.709254 130077 version.go:99] falling back to the local client version: v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# This command generates a kubeadm.conf file
kubeadm config print init-defaults > kubeadm.conf
Downloading the images when k8s.gcr.io is blocked
By default this configuration pulls images from Google's registry, k8s.gcr.io, which cannot be reached from here. So we point it at a domestic mirror instead, for example Aliyun's:
vim kubeadm.conf
...
imageRepository: registry.aliyuncs.com/google_containers   # repository address
kind: ClusterConfiguration
kubernetesVersion: v1.15.1                                  # version
...
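If you'd rather not edit the file interactively, the imageRepository line can be switched with sed (a sketch assuming the default generated value; verify with grep afterwards):
```
# Switch the image repository to the Aliyun mirror, then confirm both fields
sed -i 's#^imageRepository: .*#imageRepository: registry.aliyuncs.com/google_containers#' kubeadm.conf
grep -E 'imageRepository|kubernetesVersion' kubeadm.conf
```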
Download the required images
Once kubeadm.conf has been edited, the following command automatically pulls the required images from the domestic mirror:
kubeadm config images pull --config kubeadm.conf
Tag the images with docker tag
After the download, the images still have to be re-tagged so that they all carry the k8s.gcr.io prefix. Without the k8s.gcr.io tags the kubeadm installation later on will run into problems, because kubeadm only recognizes Google's own naming. After tagging, delete the images that carry the registry.aliyuncs.com prefix. The operations are written into the script below.
```
#!/bin/bash
# Re-tag the images
docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
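# As described above, remove the registry.aliyuncs.com-prefixed copies once the k8s.gcr.io tags exist
# (these image names mirror the tag commands above; adjust if your pulled versions differ)
docker rmi registry.aliyuncs.com/google_containers/coredns:1.3.1
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/etcd:3.3.10
docker rmi registry.aliyuncs.com/google_containers/pause:3.1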
```
Check the downloaded image list
[root@master ~]# docker image ls
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-controller-manager   v1.15.1   d75082f1d121   2 weeks ago     159MB
k8s.gcr.io/kube-apiserver            v1.15.1   68c3eb07bfc3   2 weeks ago     207MB
k8s.gcr.io/kube-proxy                v1.15.1   89a062da739d   2 weeks ago     82.4MB
k8s.gcr.io/kube-scheduler            v1.15.1   b0b3c4c404da   2 weeks ago     81.1MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   6 months ago    40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   8 months ago    258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   19 months ago   742kB
Initialize the master node with kubeadm init
kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=172.22.0.0/16 --apiserver-advertise-address=192.168.198.200
Here the pod network is defined as 172.22.0.0/16, and the API server advertise address is the master's own IP.
When initialization succeeds, the output ends like this:
```
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.198.200:6443 --token 81i5bj.qwo2gfiqafmr6g6s --discovery-token-ca-cert-hash sha256:aef745fb87e366993ad20c0586c2828eca9590c29738ef....
```
Make a note of this final kubeadm join command; it is needed later when adding the nodes.
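If the join command gets lost, or the token expires (bootstrap tokens are valid for 24 hours by default), a new one can be generated on the master:
```
# Print a fresh kubeadm join command with a new token
kubeadm token create --print-join-command
```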
[root@master ~]# ll /etc/kubernetes/
total 36
-rw------- 1 root root 5451 Aug  5 15:12 admin.conf
-rw------- 1 root root 5491 Aug  5 15:12 controller-manager.conf
-rw------- 1 root root 5459 Aug  5 15:12 kubelet.conf
drwxr-xr-x 2 root root  113 Aug  5 15:12 manifests
drwxr-xr-x 3 root root 4096 Aug  5 15:12 pki
-rw------- 1 root root 5435 Aug  5 15:12 scheduler.conf
Verification
Configure the kubectl command
mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config
Run the pod-listing command and check the status:
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-rrpgm         0/1     Pending   0          5d18h
kube-system   coredns-5c98db65d4-xg5cc         0/1     Pending   0          5d18h
kube-system   etcd-master                      1/1     Running   0          5d18h
kube-system   kube-apiserver-master            1/1     Running   0          5d18h
kube-system   kube-controller-manager-master   1/1     Running   0          5d18h
kube-system   kube-proxy-8vf84                 1/1     Running   0          5d18h
kube-system   kube-scheduler-master            1/1     Running   0          5d18h
The coredns pods are in the Pending state; ignore that for now.
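They will stay Pending until a pod network add-on (Calico, below) is deployed; once it is, a quick way to confirm they have started is:
```
# coredns should move from Pending to Running once the network add-on is up
kubectl -n kube-system get pods -l k8s-app=kube-dns
```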
You can also run kubectl get cs to check the health of the cluster components:
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
Download the official Calico images
Three images need to be downloaded here: calico/node:v3.1.4, calico/cni:v3.1.4 and calico/typha:v3.1.4.
Just run docker pull to fetch them:
docker pull calico/node:v3.1.4
docker pull calico/cni:v3.1.4
docker pull calico/typha:v3.1.4
Tag the three Calico images
docker tag calico/node:v3.1.4 quay.io/calico/node:v3.1.4
docker tag calico/cni:v3.1.4 quay.io/calico/cni:v3.1.4
docker tag calico/typha:v3.1.4 quay.io/calico/typha:v3.1.4
Delete the original images
docker rmi calico/node:v3.1.4
docker rmi calico/cni:v3.1.4
docker rmi calico/typha:v3.1.4
Deploy Calico
4.1 Download and apply the rbac-kdd.yaml file
curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml -O
kubectl apply -f rbac-kdd.yaml
4.2 Download and configure the calico.yaml file
curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/policy-only/1.7/calico.yaml -O
In the ConfigMap, change the typha_service_name value from none to calico-typha:
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas
  # below. We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is
  # essential.
  #typha_service_name: "none"         # before
  typha_service_name: "calico-typha"  # after
In the calico-typha Deployment, set spec.replicas to 1:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha
spec:
  # Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the
  # typha_service_name variable in the calico-config ConfigMap above.
  #
  # We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential
  # (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In
  # production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade.
  #replicas: 0  # before
  replicas: 1   # after
  revisionHistoryLimit: 2
  template:
    metadata:
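For reference, the two edits above can also be scripted with sed (a sketch assuming the stock manifest text; review calico.yaml before applying it):
```
# Enable Typha and give it a single replica, matching the manual edits above
sed -i 's/typha_service_name: "none"/typha_service_name: "calico-typha"/' calico.yaml
sed -i 's/replicas: 0/replicas: 1/' calico.yaml
```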
4.3 Define the pod CIDR
Find CALICO_IPV4POOL_CIDR and change its value to the pod network defined earlier, in my case 172.22.0.0/16:
- name: CALICO_IPV4POOL_CIDR
  value: "172.22.0.0/16"
4.4 Enable bird mode
Set CALICO_NETWORKING_BACKEND to bird; this selects the BGP network backend:
- name: CALICO_NETWORKING_BACKEND
  #value: "none"  # before
  value: "bird"   # after
4.5 Apply the calico.yaml file
With the parameters above tuned, run the following command to deploy Calico:
kubectl apply -f calico.yaml
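After applying, it takes a short while for the Calico pods to start and for coredns to leave the Pending state; you can watch the progress with:
```
# Watch kube-system pods until the calico pods and coredns are all Running
kubectl get pods -n kube-system -o wide -w
```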
Download and install the images (run on the nodes)
The nodes also need a few images, namely: kube-proxy:v1.15.1, pause:3.1, calico-node:v3.1.4, calico-cni:v3.1.4 and calico-typha:v3.1.4.
1.1 Download the images, tag them, and delete the originals
```
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker pull calico/node:v3.1.4
docker pull calico/cni:v3.1.4
docker pull calico/typha:v3.1.4
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag calico/node:v3.1.4 quay.io/calico/node:v3.1.4
docker tag calico/cni:v3.1.4 quay.io/calico/cni:v3.1.4
docker tag calico/typha:v3.1.4 quay.io/calico/typha:v3.1.4
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/pause:3.1
docker rmi calico/node:v3.1.4
docker rmi calico/cni:v3.1.4
docker rmi calico/typha:v3.1.4
```
1.2 Join the nodes to the cluster
On each node, run the join command recorded earlier: `kubeadm join 192.168.198.200:6443 --token 81i5bj.qwo2gfiqafmr6g6s --discovery-token-ca-cert-hash sha256:aef745fb87e366993ad20c0586c2828eca9590c29738ef....`
Once that finishes, run kubectl get nodes on the master to check that the nodes are healthy:
```
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 5d19h v1.15.1
node1 Ready
node2 Ready
```
At this point, the Kubernetes deployment is complete.
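As an optional smoke test (not part of the original walkthrough), you can schedule a simple workload and confirm it lands on a worker node:
```
# Create a test nginx deployment, see which node its pod lands on, then clean up
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx
```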