Kubernetes Series 2: Installing a k8s Environment with kubeadm

Environment

Three hosts: one master and two nodes.

192.168.31.11  k8s-1  as the master
192.168.31.12  k8s-2  as a node
192.168.31.13  k8s-3  as a node

 

Each host runs CentOS Linux release 7.6.1810 (Core).

Software versions:

docker-ce-selinux-17.03.3.ce-1.el7.noarch
docker-ce-17.03.2.ce-1.el7.centos.x86_64
kubelet-1.11.1-0.x86_64
kubeadm-1.11.1-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.11.1-0.x86_64

 

1. Basic host setup

Set the hostname on each host

# 192.168.31.11
hostnamectl set-hostname k8s-1
# 192.168.31.12
hostnamectl set-hostname k8s-2
# 192.168.31.13
hostnamectl set-hostname k8s-3

 

Add the following entries to /etc/hosts on every host

192.168.31.11 k8s-1
192.168.31.12 k8s-2
192.168.31.13 k8s-3
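The three entries can also be appended with a small script that skips lines already present, so it is safe to re-run. This is a sketch, not from the original; it writes to a temporary copy here so it can be dry-run, and `HOSTS_FILE` would be pointed at /etc/hosts to apply it for real:

```shell
# Append the cluster entries to a hosts file, skipping any already present.
# HOSTS_FILE is a throwaway copy here for a safe dry run; use /etc/hosts for real.
HOSTS_FILE=$(mktemp)

for entry in "192.168.31.11 k8s-1" "192.168.31.12 k8s-2" "192.168.31.13 k8s-3"; do
    grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done

cat "$HOSTS_FILE"
rm -f "$HOSTS_FILE"
```

Because of the `grep -qF` guard, running the loop twice leaves exactly three lines.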

 

Synchronize the system time on every host

# Set the time zone
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

yum -y install ntpdate
ntpdate cn.pool.ntp.org

 

Disable the firewall and SELinux on every host

systemctl disable firewalld
systemctl stop firewalld
setenforce 0

# Edit the config file to disable SELinux permanently
vi /etc/sysconfig/selinux
SELINUX=disabled

 

Disable the swap partition (if the hosts were created without a swap partition, this step can be skipped)

swapoff -a
# Comment out the swap entry so it is not mounted on the next boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
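What the sed does can be verified on a throwaway copy of a typical CentOS fstab (the device names below are only illustrative): it comments out any line containing a swap mount and leaves other entries untouched.

```shell
# Demonstrate the fstab edit on a temporary copy: the sed comments out
# the swap line while the root filesystem entry is left as-is.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'FSTAB_EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
FSTAB_EOF

sed -i '/ swap / s/^\(.*\)$/#\1/g' "$FSTAB"

cat "$FSTAB"
rm -f "$FSTAB"
```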

 

2. Pre-installation configuration

The following steps must be performed on every host.

Configure the docker-ce yum repository on every host

# Install wget
yum -y install wget

# Download the repo file
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

# Point the repo at the Tsinghua mirror
sed -i 's@download.docker.com@mirrors.tuna.tsinghua.edu.cn/docker-ce@g' /etc/yum.repos.d/docker-ce.repo

 

Configure the Kubernetes yum repository on every host

# vi /etc/yum.repos.d/kubernetes.repo and add the following
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1

 

Refresh the yum cache

yum clean all
yum repolist

 

Install the docker-ce package on every host (some master components run in Docker containers)

# Upgrade docker-ce-selinux first; the default version is too old
yum -y install https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

# Install docker-ce
yum -y install docker-ce-17.03.2.ce

 

Configure and start the Docker service

# Configure a registry mirror in the daemon config
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

# Start the service
systemctl daemon-reload
systemctl enable docker
systemctl start docker

# Allow bridged traffic to be processed by iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
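Note that writing to /proc does not survive a reboot. A common way to persist the setting (an addition, not part of the original steps) is a drop-in file under /etc/sysctl.d:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

The file is loaded on boot and can be applied immediately with `sysctl --system`.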

 

3. Install and start the services on the master

Install the required rpm packages

# Install kubelet-1.10.10 first.
# Installing kubelet-1.11.1 directly pulls in a kubernetes-cni version that is too new;
# installing kubelet-1.10.10 first pulls in kubernetes-cni-0.6.0 as a dependency.
# No better workaround for this has been found yet.
yum -y install kubelet-1.10.10

# Install the required 1.11.1 versions
yum -y install kubeadm-1.11.1 kubelet-1.11.1 kubectl-1.11.1

 

Configure and start the kubelet service

vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs

# Enable at boot
systemctl enable kubelet

 

Pull the images in advance (kubeadm pulls images during init, but they are hosted by Google and cannot be downloaded directly)

docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1

docker pull xiyangxixia/k8s-scheduler:v1.11.1
docker tag xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1

docker pull xiyangxixia/k8s-controller-manager:v1.11.1
docker tag xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1

docker pull xiyangxixia/k8s-apiserver-amd64:v1.11.1
docker tag xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1

docker pull xiyangxixia/k8s-etcd:3.2.18
docker tag xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18

docker pull xiyangxixia/k8s-coredns:1.1.3
docker tag xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1

docker pull xiyangxixia/k8s-flannel:v0.10.0-s390x
docker tag xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x

docker pull xiyangxixia/k8s-flannel:v0.10.0-ppc64le
docker tag xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64le

docker pull xiyangxixia/k8s-flannel:v0.10.0-arm
docker tag xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm

docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
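The repetitive pull/tag pairs above can be driven from a single list. A sketch: with `RUN=echo` this is a dry run that only prints the commands; setting `RUN=` (empty) would execute them against Docker for real.

```shell
# Pull each mirror image and re-tag it under the name kubeadm expects.
# RUN=echo turns every docker invocation into a printed command (dry run).
RUN=echo

while read -r src dst; do
    $RUN docker pull "$src"
    $RUN docker tag "$src" "$dst"
done <<'EOF'
xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x
xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64le
xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm
xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
EOF
```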

 

Initialize the Kubernetes master node

kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

# --kubernetes-version=v1.11.1  the version must be given manually since the Google service is unreachable
# --pod-network-cidr=10.244.0.0/16  the subnet pods get IPs from; the default is fine
# --service-cidr=10.96.0.0/12  the service subnet; must not overlap with the host network
# --ignore-preflight-errors=Swap  ignore the swap preflight error (use =all to ignore all errors)

 

After initialization succeeds, run the commands shown in its output

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

 

Once all services (running in Docker) are up, query the cluster state

# Query component status
kubectl get cs
# Query node status (flannel is not deployed yet, so nodes show NotReady)
kubectl get nodes
# Query namespaces
kubectl get ns

 

Deploy the flannel network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Check the downloaded images
docker images | grep flannel

 

Check the node status again; it should now show Ready

kubectl get nodes

 

4. Install and start the services on the nodes

The steps are identical on both nodes.

Install the required rpm packages; make sure the firewall and SELinux are disabled (permanently) and that docker-ce is installed and running

# Install kubelet-1.10.10 first.
# Installing kubelet-1.11.1 directly pulls in a kubernetes-cni version that is too new;
# installing kubelet-1.10.10 first pulls in kubernetes-cni-0.6.0 as a dependency.
# No better workaround for this has been found yet.
yum -y install kubelet-1.10.10

# kubectl is not strictly necessary on the nodes
yum -y install kubeadm-1.11.1 kubelet-1.11.1 kubectl-1.11.1

 

Configure and start the kubelet service

vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs

# Enable at boot
systemctl enable kubelet

On the master, create a token

# List all current tokens
kubeadm token list

# Create a token and note it down
kubeadm token create

[root@k8s-1 .kube]# kubeadm token create
8d5cbr.n84orohakj3o5ppd


# If the --discovery-token-ca-cert-hash value is not at hand, it can be obtained
# by running the following command chain on the master
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
   
[root@k8s-1 .kube]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
>    openssl dgst -sha256 -hex | sed 's/^.* //'
febac84e25f527f8ee8770a35165164ea8f930929ae0d648405240b3850f5c53
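The command chain extracts the CA certificate's public key, DER-encodes it, and takes its SHA-256 digest. The same pipeline can be sanity-checked against any certificate, for example a throwaway self-signed one generated purely for illustration (the CN and file paths below are arbitrary; the real input is /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway self-signed cert as a stand-in for ca.crt,
# then run the same public-key digest pipeline over it.
# The result is always 64 lowercase hex characters.
CERT=$(mktemp)
KEY=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$KEY" -out "$CERT" \
    -subj "/CN=sanity-check" -days 1 2>/dev/null

openssl x509 -pubkey -in "$CERT" | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'

rm -f "$CERT" "$KEY"
```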

 

Pull the images in advance (they are pulled during the join, but cannot be downloaded directly)

docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1

docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1

docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

 

Join the node to the cluster using the created token

kubeadm join 192.168.31.11:6443 --token 8d5cbr.n84orohakj3o5ppd --discovery-token-ca-cert-hash sha256:febac84e25f527f8ee8770a35165164ea8f930929ae0d648405240b3850f5c53 --ignore-preflight-errors=Swap

# 192.168.31.11:6443  the master's IP address; the firewall and SELinux must be disabled on the master
# --token  the value printed by kubeadm token create
# --discovery-token-ca-cert-hash sha256:  the hash obtained with the openssl command chain above

 

5. Verify that the cluster initialized successfully

Check that the node images were downloaded

[root@k8s-1 .kube]# docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                     v0.11.0-amd64       ff281650a721        2 months ago        52.5 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.1             d5c25579d0ff        8 months ago        97.8 MB
xiyangxixia/k8s-proxy-amd64                v1.11.1             d5c25579d0ff        8 months ago        97.8 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.1             816332bd9d11        8 months ago        187 MB
xiyangxixia/k8s-apiserver-amd64            v1.11.1             816332bd9d11        8 months ago        187 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.1             52096ee87d0e        8 months ago        155 MB
xiyangxixia/k8s-controller-manager         v1.11.1             52096ee87d0e        8 months ago        155 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.1             272b3a60cd68        8 months ago        56.8 MB
xiyangxixia/k8s-scheduler                  v1.11.1             272b3a60cd68        8 months ago        56.8 MB
xiyangxixia/k8s-coredns                    1.1.3               b3b94275d97c        10 months ago       45.6 MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        10 months ago       45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        12 months ago       219 MB
xiyangxixia/k8s-etcd                       3.2.18              b8df3b177be2        12 months ago       219 MB
quay.io/coreos/flannel                     v0.10.0-s390x       463654e4ed2d        14 months ago       47 MB
xiyangxixia/k8s-flannel                    v0.10.0-s390x       463654e4ed2d        14 months ago       47 MB
quay.io/coreos/flannel                     v0.10.0-ppc64l      e2f67d69dd84        14 months ago       53.5 MB
xiyangxixia/k8s-flannel                    v0.10.0-ppc64le     e2f67d69dd84        14 months ago       53.5 MB
xiyangxixia/k8s-flannel                    v0.10.0-arm         c663d02f7966        14 months ago       39.9 MB
quay.io/coreos/flannel                     v0.10.0-arm         c663d02f7966        14 months ago       39.9 MB
quay.io/coreos/flannel                     v0.10.0-amd64       f0fad859c909        14 months ago       44.6 MB
xiyangxixia/k8s-flannel                    v0.10.0-amd64       f0fad859c909        14 months ago       44.6 MB
k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        15 months ago       742 kB
xiyangxixia/k8s-pause                      3.1                 da86e6ba6ca1        15 months ago       742 kB

 

Check (on the master) that all nodes are Ready

[root@k8s-1 .kube]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-1     Ready     master    14m       v1.11.1
k8s-2     Ready     <none>    4m        v1.11.1
k8s-3     Ready     <none>    4m        v1.11.1

 

Query the pods in the kube-system namespace, including those scheduled on the worker nodes

[root@k8s-1 .kube]# kubectl get pods -n kube-system -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-78fcdf6894-44qf5        1/1       Running   0          14m       10.244.0.2      k8s-1
coredns-78fcdf6894-bxb2m        1/1       Running   0          14m       10.244.0.3      k8s-1
etcd-k8s-1                      1/1       Running   0          13m       192.168.31.11   k8s-1
kube-apiserver-k8s-1            1/1       Running   0          13m       192.168.31.11   k8s-1
kube-controller-manager-k8s-1   1/1       Running   0          14m       192.168.31.11   k8s-1
kube-flannel-ds-amd64-cr8j8     1/1       Running   0          6m        192.168.31.11   k8s-1
kube-flannel-ds-amd64-kxk5w     1/1       Running   0          4m        192.168.31.12   k8s-2
kube-flannel-ds-amd64-pk4zl     1/1       Running   0          4m        192.168.31.13   k8s-3
kube-proxy-mxsrg                1/1       Running   0          4m        192.168.31.12   k8s-2
kube-proxy-tp95q                1/1       Running   0          4m        192.168.31.13   k8s-3
kube-proxy-twpvt                1/1       Running   0          14m       192.168.31.11   k8s-1
kube-scheduler-k8s-1            1/1       Running   0          14m       192.168.31.11   k8s-1